ngnius
Channel Manager
Discord bots are hard
Posts: 80
Post by ngnius on Oct 20, 2017 14:28:55 GMT
I'm pretty sure that's already partly the case (AI computer chips are often based on our brain's design), but let's ignore that for a moment. AlphaGo Zero, the new and better version of Google's AlphaGo, has just beaten AlphaGo 100 games to 0 at Go. It did this by learning how to win against itself. This also made it far more efficient - it accomplished everything with 1/12th of the resources that AlphaGo required. (source: www.foxnews.com/tech/2017/10/20/googles-artificial-intelligence-computer-no-longer-constrained-by-limits-human-knowledge.amp.html ) This closely mimics how the human brain works - try playing a game whose rules you know but which you've only ever watched other people play, never played yourself (it won't go well). A lot of other technology we've invented is based on something biological (pump : heart, camera : eye). With that in mind, I'm sure that if we got some neuroscientists together (who also knew how to code AI), plus a few human brains to use as models, we could build the best AI design out there.
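To make "learning how to win against itself" concrete, here's a toy sketch of pure self-play learning. This is emphatically not AlphaGo Zero's actual algorithm (which combines deep networks with Monte-Carlo tree search); it's just tabular self-play on the simple game Nim-21, and all names and parameters below are my own invention:

```python
import random

random.seed(0)  # make this toy run reproducible

# Nim-21: players alternately remove 1-3 stones from a pile of 21;
# whoever takes the last stone wins. Optimal play leaves the opponent
# a multiple of 4. A shared value table learns this purely by self-play.

N, MOVES = 21, (1, 2, 3)
Q = {}  # Q[(stones, move)] = estimated value of `move` with `stones` left

def choose(stones, eps):
    """Epsilon-greedy move selection over the legal moves."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda m: Q.get((stones, m), 0.0))

def train(episodes=20000, eps=0.2, alpha=0.1):
    for _ in range(episodes):
        stones, history = N, []
        while stones > 0:
            m = choose(stones, eps)
            history.append((stones, m))
            stones -= m
        # The player who took the last stone wins; propagate the
        # outcome backward, flipping sign for alternating players.
        reward = 1.0
        for state in reversed(history):
            Q[state] = Q.get(state, 0.0) + alpha * (reward - Q.get(state, 0.0))
            reward = -reward

train()
# After self-play training, the greedy policy should leave the
# opponent a multiple of 4 (the known optimal strategy for this game).
print(choose(21, eps=0.0))
```

The key property, as in AlphaGo Zero, is that nothing here is learned from human games: the agent's only opponent is itself.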
14flash
Script Writer/Editor
Posts: 100
Post by 14flash on Oct 21, 2017 4:12:37 GMT
I disagree with this idea on so many levels.
Firstly, there is a big misconception that because neural networks were inspired by a model of the brain, they must also behave like a brain, which is simply not the case. In NNs, features are fed forward through nodes, each of which computes a weighted sum of its inputs and produces an output based on how that sum compares to some threshold. To train the model, the result is compared to the actual value and the weights are corrected by gradient descent, which propagates backward through the network. There is no evidence that neurons apply the same sum-and-output behavior that NNs follow, and last I heard (I'm trying to find the source again) brains almost certainly don't use back-propagation to "train" themselves. The biological complexity of a brain also adds several layers of abstraction on top of what NNs refer to as "weighting" connections, and even on how connections get made or destroyed. Just because some high-level features of the two are similar does not mean they act, or can be used, in the same way.
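The mechanical abstraction described above really is this small. Here's a minimal sketch of a single artificial "neuron": a weighted sum squashed through a sigmoid (a smooth stand-in for a hard threshold), with weights corrected by gradient descent on the error. Function names and hyperparameters are my own, for illustration only:

```python
import math

def sigmoid(z):
    """Smooth threshold: maps the weighted sum to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, lr=0.5, epochs=2000):
    """data: list of (inputs, target) pairs; returns learned weights and bias."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in data:
            # Forward pass: weighted sum of inputs, then activation.
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Backward pass: for a one-node "network", back-propagation
            # collapses to the chain rule on the squared error.
            grad = (y - t) * y * (1 - y)
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

# A single neuron can learn linearly separable functions like AND.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(AND)
preds = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)) for x, _ in AND]
print(preds)  # → [0, 0, 0, 1]
```

That's the entire borrowed idea: a weighted sum and an error signal. Everything a real neuron does beyond this (spiking dynamics, neurotransmitters, structural plasticity) has no counterpart here.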
Furthermore, using brains as the basis for future AI developments is extremely limiting. While evolution is good at optimizing over billions of years, it's unreasonable to assume that a brain is the pinnacle of learning capabilities in either physical or computational models. We shouldn't be trying to copy nature explicitly; rather, we should be trying to find novel ways of modeling and reasoning about the world. To use neuroscience as the basis for everything about learning is a handicap.
Also, the brain is still very much a mystery. There have been many advancements recently, but we still know more about what the brain doesn't do than about what it does do or how it fundamentally works. There is little that neuroscientists could bring to the table in terms of modeling an actual brain. It's also very difficult to translate all the complexities of a biological organ into a simple machine; the correct level of abstraction simply doesn't exist.
Post by Oriana on Dec 21, 2017 10:05:56 GMT
14flash
Script Writer/Editor
Posts: 100
Post by 14flash on Dec 22, 2017 4:15:04 GMT
(First, my thoughts on that paper)
I realize they were just applying techniques naïvely, but that paper really should have been ten times as long. It felt like in a lot of cases they stopped short of obvious next steps that could have turned up more clues. It would be really interesting to see how these techniques could be combined and iterated to find more patterns or ways of modeling the system.
But also, huge props to the authors of that paper. It takes a lot of courage to admit that your field of study is falling short of what it intends to do.
(Okay, now back on topic)
This actually made me realize a completely new reason I wouldn't want a neuroscientist to do AI development (or at least not ML).
A neuroscientist is just that: a scientist. A scientist seeks to understand some system through a repeated hypothesis-experiment cycle. This is very good for trying to understand an existing system, but when you need to design a new system, it may not provide enough insight into how to modify the system to produce better results. In other words, the post-experiment analysis will produce several new hypotheses to test instead of narrowing the range of possible hypotheses.
The hypothesis-experiment cycle can still work, though. For small or relatively simple systems, it's proven to be quite effective. For example, the top rated chess engine, Stockfish, still uses Alpha-Beta Tree Search with a manually tuned evaluation function.
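For readers unfamiliar with Alpha-Beta, here's a minimal sketch of the idea: a minimax search that prunes branches which provably cannot affect the result. To stay self-contained, the "game" is just a nested list whose leaves are scores from a hand-tuned evaluation function; a real engine like Stockfish generates positions and evaluates them far more elaborately:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning over a nested-list game tree.

    Leaves are static evaluation scores (the 'manually tuned evaluation
    function'); internal nodes alternate between the two players.
    """
    if not isinstance(node, list):      # leaf: return its evaluation score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:           # beta cutoff: opponent avoids this line
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:               # alpha cutoff: we avoid this line
            break
    return best

# Small example tree: the minimax value is found without
# visiting every leaf, thanks to the cutoffs.
tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alphabeta(tree))  # → 6
```

The point for this discussion: every part of this system is designed and tuned by hand, and that hypothesis-tune-test loop works precisely because the system is small and well defined.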
But with modern AI development (that is, the Machine Learning kind), we're rarely interested in simple or well-defined systems. Instead, we want a program which is capable of making decisions in high-dimensional spaces quickly, and which can readily accept and interpret new kinds of information.
These kinds of problems aren't well suited for science, but they do seem to fit right in with engineering.
Post by Oriana on Jan 5, 2018 3:13:05 GMT
I think a more interesting question might be: "Should software engineers study neuroscience?"
And there, I might be more optimistic.