The Orion's Arm Universe Project Forums

Pictures generated by Google's neural nets
#11
How about this one?


(attached image thumbnail)
#12
You know, I think that actually improves the cover for After Tranquillity... A lot. Makes it way creepier.
#13
I'm somebody who knew everything there was to know about neural networks back in the '90s. I've been there: AI researcher, worked at a natural-language startup for years, got a couple of patents, built conversational robots (which ran on pure reflex action and in truth were only about as smart as clams) that major companies used for customer support, etc.

To say that the art has significantly advanced is an understatement. If you had asked me only a few weeks ago whether it was even possible to train a neural network 30 levels deep, I'd have said no, and cited the well-known "Vanishing Gradient Problem" that seemed to be an insurmountable obstacle back when I was doing my work. The only way I could imagine getting past it was to use genetic algorithms to evolve network weights, and that was going to take FOREVER in computer time to produce results.
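
To see why that problem looked insurmountable, here's a minimal sketch (illustrative only, not anyone's research code): every layer multiplies the backpropagated error by a weight times the sigmoid's derivative, and the sigmoid's derivative never exceeds 0.25, so after thirty layers of that the signal is essentially gone.

Code:
import numpy as np

# Toy demonstration of the Vanishing Gradient Problem: track the
# magnitude of a backpropagated error signal through 30 sigmoid layers.
# Each layer contributes a chain-rule factor of |w| * sigmoid'(z).
rng = np.random.default_rng(0)
depth = 30
grad = 1.0  # error-gradient magnitude arriving at the output layer

for layer in range(depth):
    w = rng.normal(0.0, 1.0)            # a typical weight
    z = rng.normal(0.0, 1.0)            # a typical pre-activation
    sig = 1.0 / (1.0 + np.exp(-z))
    grad *= abs(w) * sig * (1.0 - sig)  # sigmoid'(z) = sig*(1-sig) <= 0.25

print(f"gradient magnitude after {depth} layers: {grad:.3e}")
# Prints a number vanishingly close to zero, which is why the early
# layers of a 30-deep sigmoid net effectively never learned.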

So when I saw this I was more than a little bit astonished, and went immediately to catch up on the research.

Holy crap. They figured out a whole lot about training neural networks in the last 15 years. I should have expected that, but while I was working it seemed like one of the Classic Algorithms, like sorts and so on, which don't change over time. But it manifestly isn't.

The autoencoder approach to training deep layers completely bypasses the Vanishing Gradient Problem, aside from a little fine-tuning. The Dropout method is by far the best approach I've ever reviewed for preventing overfitting without getting in the way of convergence - and it's effing simple. People have figured out productive and useful ways to train recurrent and nonlinear networks. And convolutional application of the deepest layers is a new idea that saves a half-acre of computer time in training and backprop, and dramatically decreases overfitting and sensitivity to irrelevant crap at the input level (at the expense of some extra time spent on forward propagation).
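
To show just how effing simple Dropout is, here's a minimal sketch (my own illustration of the published idea, not production code): during training you zero each hidden unit at random so units can't co-adapt, and rescale the survivors so the expected activation stays the same.

Code:
import numpy as np

def dropout(activations, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p during training."""
    if not training or p == 0.0:
        return activations            # the layer is a no-op at test time
    if rng is None:
        rng = np.random.default_rng()
    mask = (rng.random(activations.shape) >= p) / (1.0 - p)
    return activations * mask         # survivors are scaled up by 1/(1-p)

h = np.array([0.3, 1.2, -0.7, 0.9])
print(dropout(h, training=True))      # roughly half the units zeroed
print(dropout(h, training=False))     # unchanged at test time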

And, well, memory being an order of magnitude larger or so doesn't hurt a bit. And neither do massively parallel GPUs optimized for matrix calculations (they speed up the training process by factors of a thousand, on DESKTOP machines!). And then there are pooling and softmax techniques that didn't exist back then.
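
For the curious, a minimal sketch of those last two pieces (again, just an illustration): max-pooling keeps only the strongest response in each window of a feature map, and softmax turns raw output scores into a probability distribution over classes.

Code:
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max-pooling over an (H, W) feature map; H and W must be even."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(scores):
    """Numerically stable softmax over a vector of class scores."""
    e = np.exp(scores - scores.max())   # subtract the max to avoid overflow
    return e / e.sum()

print(max_pool_2x2(np.arange(16.0).reshape(4, 4)))  # keeps window maxima
print(softmax(np.array([2.0, 1.0, 0.1])))           # probabilities, sum to 1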

We can now do things we could never do before.

More to the point, there is now the MEANS to try out ideas I had fifteen years ago that could, um, lead somewhere very interesting.

I won't claim to have solved strong AI until something sues me for ownership of the hardware it runs on (not bloody likely), but we live in interesting times and I've got a dozen-and-a-half things I want to try that it looks like nobody's tried yet. I think I can leverage these new capabilities in ways that people won't believe.

So I've spent the last week writing code and laughing uncontrollably. I might be who people mean when they say "Mad Scientist...."
#14
Guys? Strong AI is a serious possibility now. Like, within the next five or ten years. Back in the '90s it looked like it was still 40 years off. But we have to start making decisions about it, now.
#15
(08-11-2015, 06:45 AM)stevebowers Wrote: How about this one?

One good look at the cover and you are definitely after tranquility (which you just lost...).
Stephen
#16
I'm not sure why this particular neural net dreams about dogs all the time, but there is a definite dog-bias in there.
#17
(08-11-2015, 11:02 AM)Bear Wrote: I've got a dozen-and-a-half things I want to try that it looks like nobody's tried yet. I think I can leverage these new capabilities in ways that people won't believe.

So I've spent the last week writing code and laughing uncontrollably. I might be who people mean when they say "Mad Scientist...."

Sounds interesting! In that case, I hope some of your projects include AI systems that can help humans with medical research and medical expertise. That would be awesome:

http://www-03.ibm.com/press/us/en/pressr.../47435.wss

http://now.tufts.edu/news-releases/plana...telligence
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
#18
(08-12-2015, 04:08 AM)stevebowers Wrote: I'm not sure why this particular neural net dreams about dogs all the time, but there is a definite dog-bias in there.

Yeah, looking at that stuff I get pretty sick of dogs too. Turns out the corpus of images they trained it on included a data set of several tens of thousands of images collected for an earlier system that was trained on a "classification of dog breeds" task, where it was supposed to identify, for example, which dog in the picture was a beagle and which was a schnauzer.
#19
(08-12-2015, 07:41 AM)chris0033547 Wrote: Sounds interesting! In that case, I hope some of your projects include AI systems that can help humans with medical research and medical expertise. That would be awesome:

http://www-03.ibm.com/press/us/en/pressr.../47435.wss

http://now.tufts.edu/news-releases/plana...telligence

That is some cool stuff. Hmmm. I have no doubt that the things I'm working on would be applicable to medical research, but there's nothing particularly task-specific about them. They could as easily be applied to weapons research.

If they work, they'll mostly make systems a lot more sensitive to context and more aware of their previous interactions than they are now - capable of keeping track of the thread of a conversation/interaction, or even remembering *specific* conversations/interactions and using them directly as input later.

Most neural networks pretty much run on reflex action; even vague hints of contextual memory are difficult to achieve, because encoding short-term memory in the pattern of neural connections is both hard and vaguely defined. Nobody's figured out a way to give a network "access to simple computer storage" with neural-net controls that can be fine-tuned by backpropagation, so that the network can learn to use the storage effectively.

Networks also suck at math and other such things largely for the same reason: nobody has figured out a way for them to control the CPU's ability to easily do math and then feed the results of that math to later levels of the network. I mean, it's easy to hook it up, but very, very hard to get a correct error gradient / correction so that the system can learn when it's getting the use of it right - and specifically what network-weight corrections to make when it's getting it wrong.
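
To make the difficulty concrete, here's a sketch of the kind of thing that *would* make storage trainable (an illustration of the general "soft read" idea, not my actual approach): instead of a hard lookup of one memory slot, read a softmax-weighted blend of all slots. Because the read is a smooth function of the network's query, an error gradient can flow back through it.

Code:
import numpy as np

def soft_read(memory, key):
    """Differentiable memory read: memory is (slots, width), key is (width,)."""
    scores = memory @ key              # similarity of the key to each slot
    e = np.exp(scores - scores.max())
    weights = e / e.sum()              # softmax attention over the slots
    return weights @ memory            # a blend dominated by the best match

memory = np.eye(3)                     # three one-hot memory slots
key = np.array([0.1, 2.0, 0.1])        # the network "asks for" slot 1
print(soft_read(memory, key))          # close to [0, 1, 0], and differentiable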

I think (hope) I may have a way to crack that: letting a network meaningfully evaluate, and draw correction gradients from, whole sequences of actions and outputs rather than just one action/output at a time. If I'm right, that would help a whole lot with these context-sensitive tasks.
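
For reference, the textbook tool that does something like this is backpropagation through time: unroll the net over the whole sequence and run the chain rule back through every step, so the error at each output assigns credit to everything that came before it. A one-unit toy version (illustration only, not my actual code):

Code:
import numpy as np

def bptt_grad_w(w, u, xs, ys):
    """Gradient of total squared error w.r.t. the recurrent weight w,
    for the one-unit net h[t] = tanh(w*h[t-1] + u*x[t])."""
    hs = [0.0]                                 # forward pass: record every state
    for x in xs:
        hs.append(np.tanh(w * hs[-1] + u * x))
    grad_w, carried = 0.0, 0.0
    for t in range(len(xs), 0, -1):            # backward through the sequence
        dh = 2.0 * (hs[t] - ys[t - 1]) + carried  # loss at step t + later steps
        dz = dh * (1.0 - hs[t] ** 2)           # back through tanh
        grad_w += dz * hs[t - 1]               # credit w for this step
        carried = dz * w                       # pass the gradient to step t-1
    return grad_w

xs, ys = [0.5, -0.3, 0.8], [0.2, 0.1, 0.4]
print(bptt_grad_w(0.7, 0.4, xs, ys))           # one number: how to nudge w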

So, yes, applicable to medical research. But someone could also use it to build this... https://www.youtube.com/watch?v=_mqDjcGgE5I

