


Why do neural networks need so many training examples to perform?



A human child at age 2 needs around five instances of a car to be able to identify it with reasonable accuracy regardless of color, make, etc. When my son was 2, he was able to identify trams and trains even though he had seen just a few of each. Since he usually confused one with the other, apparently his neural network was not trained enough, but still.



What is it that artificial neural networks are missing that prevents them from learning so much faster? Is transfer learning an answer?










  • Elephants might be a better example than cars. As others have noted, a child may have seen many cars before hearing the label, so if their mind already defines "natural kinds" it now has a label for one. However, a Western child indisputably develops a good elephant-classifying system on the basis of just a few data. – J.G., yesterday






  • What makes you think that a human child’s brain works like a neural network? – Paul Wasilewski, yesterday






  • @PaulWasilewski: Aren't brains by definition neural networks? – MSalters, 14 hours ago






  • A NN can be shown an image of a car. Your child gets a full 3D movie from different perspectives, for several different types of car. Your child also likely has similar examples to distinguish a car from, for instance their baby stroller, toys, etc. Without those, I think your child would have needed more examples. – Stian Yttervik, 10 hours ago






  • @MSalters In the sense of an Artificial Neural Network? Probably not. – Firebug, 10 hours ago
















Tags: neural-networks, neuroscience






asked yesterday by Marcin, edited 4 mins ago by smci










8 Answers


















There's a kind of goalpost moving that underlies this question. It used to be that NNs weren't very good at image recognition, so no one compared them to humans. Now that NNs are good at image tasks, suddenly it's the fault of NNs that they require a lot of training data to be comparable to children.



You can also turn this logic on its head. Suppose a child sees a number of cars the day that it's born. I wouldn't expect the child to be able to pick out a car the next day, or the next week, even though it's seen so many examples. Why are newborns so slow to learn? Because it takes a lot of exposure to the real world, and the passage of time, to change the child's neural pathways. For a neural network, we call this “training data,” but for a child we call it “growing up.”






– Sycorax
  • To make it a bit more specific, a human child has already had years of training with tens of thousands of examples allowing them to determine how objects look when viewed from different angles, how to identify their boundaries, the relationship between apparent size and actual size, and so on. – David Schwartz, 20 hours ago






  • A child's brain is active inside the womb. The baby can identify their parents by sound, after the sound is filtered through water. A newborn baby has had months of data to work with before they're born, but they still need years more before they can form a word, then a couple more years before they can form a sentence, then a couple more for a grammatically correct sentence, etc. Learning is very complicated. – Nelson, 17 hours ago





















First of all, at age two, a child knows a lot about the world and actively applies this knowledge. A child does a lot of "transfer learning" by applying this knowledge to new concepts.



Second, before seeing those five "labeled" examples of a car, a child sees a lot of cars on the street, on TV, as toy cars, etc., so a lot of "unsupervised learning" also happens beforehand.



Finally, neural networks have almost nothing in common with the human brain, so there's not much point in comparing them. Also note that there are algorithms for one-shot learning, and a lot of research on it is currently happening.







    One major aspect that I don't see in current answers is evolution.



    A child's brain does not learn from scratch. It's similar to asking how deer and giraffe babies can walk a few minutes after birth. Because they are born with their brains already wired for this task. There is some fine-tuning needed of course, but the baby deer doesn't learn to walk from "random initialization".



    Similarly, the fact that big moving objects exist and are important to keep track of is something we are born with.



    So I think the presupposition of this question is simply false. Human neural networks had the opportunity to see tons of, maybe not cars, but moving, rotating 3D objects with difficult textures and shapes, etc. But this happened over many generations, and the learning took place by evolutionary algorithms: those whose brains were better structured for this task could live to reproduce with a higher chance, leaving each next generation with better and better brain wiring from the start.






    • Fun aside: there's evidence that when it comes to discriminating between different models of cars, we actually leverage the specialized facial recognition center of our brain. It's plausible that, while a child may not distinguish between different models, the implicit presence of a 'face' on a mobile object may cause cars to be categorized as a type of creature and therefore be favored to be identified by evolution, since recognizing mobile objects with faces is helpful to survival. – Dan Bryant, 5 hours ago




















    This is a fascinating question that I've pondered over a lot as well, and I can come up with a few explanations why.




    • Neural networks work nothing like the brain. Backpropagation is unique to neural networks, and does not happen in the brain. In that sense, we just don't know the general learning algorithm in our brains. It could be electrical, it could be chemical, it could even be a combination of the two. Neural networks could be considered an inferior form of learning compared to our brains because of how simplified they are.

    • If neural networks are indeed like our brain, then human babies undergo extensive "training" of the early layers, like feature extraction, in their early days. So their neural networks aren't really trained from scratch, but rather the last layer is retrained to add more and more classes and labels.






    – sd2017 (new contributor)







      I don't know much about neural networks but I know a fair bit about babies.



      Many 2-year-olds have a lot of issues with how general words should be. For instance, it is quite common at that age for kids to use "dog" for any four-legged animal. That's a more difficult distinction than "car": just think how different a poodle looks from a great Dane, for instance, and yet they are both "dog" while a cat is not.



      And a child at 2 has seen many many more than 5 examples of "car". A kid sees dozens or even hundreds of examples of cars any time the family goes for a drive. And a lot of parents will comment "look at the car" a lot more than 5 times. But kids can also think in ways that they weren't told about. For instance, on the street the kid sees lots of things lined up. His dad says (of one) "look at the shiny car!" and the kid thinks "maybe all those other things lined up are also cars?"







        One way to train a deep neural network is to treat it as a stack of auto-encoders (Restricted Boltzmann Machines).



        In theory, an auto-encoder learns in an unsupervised manner: It takes arbitrary, unlabelled input data and processes it to generate output data. Then it takes that output data, and tries to regenerate its input data. It tweaks its nodes' parameters until it can come close to round-tripping its data. If you think about it, the auto-encoder is writing its own automated unit tests. In effect, it is turning its "unlabelled input data" into labelled data: The original data serves as a label for the round-tripped data.
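        The round-trip idea can be sketched in a few lines. This is a minimal illustration with random data and a single tanh hidden layer, not the stacked restricted-Boltzmann-machine setup described here; all sizes and names are made up for the example.

        ```python
        import numpy as np

        # A minimal autoencoder sketch: the input itself serves as the
        # training label, and the weights are tweaked until the network
        # can approximately round-trip its unlabelled data.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 8))            # unlabelled input data
        W1 = rng.normal(scale=0.1, size=(8, 3))  # encoder weights: 8 -> 3
        W2 = rng.normal(scale=0.1, size=(3, 8))  # decoder weights: 3 -> 8

        def forward(X):
            H = np.tanh(X @ W1)                  # compressed code
            return H, H @ W2                     # reconstruction

        loss_before = np.mean((forward(X)[1] - X) ** 2)

        lr = 0.05
        for _ in range(500):
            H, R = forward(X)
            err = R - X                          # reconstruction error
            dW2 = H.T @ err / len(X)             # gradient w.r.t. decoder
            dH = err @ W2.T * (1.0 - H ** 2)     # backprop through tanh
            dW1 = X.T @ dH / len(X)              # gradient w.r.t. encoder
            W1 -= lr * dW1
            W2 -= lr * dW2

        loss_after = np.mean((forward(X)[1] - X) ** 2)
        ```

        The reconstruction loss drops as training proceeds, even though no external labels were ever provided, which is the sense in which the autoencoder "writes its own unit tests."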



        After the layers of auto-encoders are trained, the neural network is fine-tuned using labelled data to perform its intended function. In effect, these are functional tests.



        The original poster asks why a lot of data is needed to train an artificial neural network, and compares that to the allegedly low amount of training data needed by a two-year-old human. The original poster is comparing apples-to-oranges: The overall training process for the artificial neural net, versus the fine-tuning with labels for the two-year-old.



        But in reality, the two-year old has been training its auto-encoders on random, self-labelled data for more than two years. Babies dream when they are in utero. (So do kittens.) Researchers have described these dreams as involving random neuron firings in the visual processing centers.






        – Jasper (new contributor)








          > A human child at age 2 needs around 5 instances of a car to be able to identify it with reasonable accuracy regardless of color, make, etc.




          The concept of "instances" gets easily muddied. While a child may have seen 5 unique instances of a car, they have actually seen thousands upon thousands of frames, in many differing environments. They have likely seen cars in other contexts. They also have an intuition for the physical world developed over their lifetime; some transfer learning probably happens here. Yet we wrap all of that up into "5 instances."



          Meanwhile, every single frame/image you pass to a CNN is considered an "example." If you apply a consistent definition, both systems are really utilizing a much more similar amount of training data.



          Also, I would like to note that convolutional neural networks - CNNs - are more useful in computer vision than ANNs, and in fact approach human performance in tasks like image classification. Deep learning is (probably) not a panacea, but it does perform admirably in this domain.







            As pointed out by others, the data-efficiency of artificial neural networks varies quite substantially, depending on the details. As a matter of fact, there are many so-called one-shot learning methods that can solve the task of labelling trams with quite good accuracy, using only a single labelled sample.



            One way to do this is so-called transfer learning: a network trained on other labels is usually very effectively adaptable to new labels, since the hard work is breaking down the low-level components of the image in a sensible way.
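            As a hedged sketch of that idea, with toy data and a made-up "pre-trained" extractor standing in for a real network trained on other labels: the extractor stays frozen, and only a small logistic-regression head is fit on a handful of labelled examples per class.

            ```python
            import numpy as np

            # Toy transfer learning: the "pre-trained" feature extractor is
            # frozen; only the small classifier head is trained on five
            # labelled samples per class (the few-shot regime).
            rng = np.random.default_rng(1)
            W_pre = rng.normal(size=(16, 4))      # stands in for learned features

            def extract_features(x):
                return np.maximum(x @ W_pre, 0.0)  # frozen ReLU features

            X = np.vstack([rng.normal(loc=-1.0, size=(5, 16)),
                           rng.normal(loc=+1.0, size=(5, 16))])
            y = np.array([0] * 5 + [1] * 5)

            F = extract_features(X)               # features computed once, never updated
            w, b = np.zeros(4), 0.0
            for _ in range(500):
                p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
                g = p - y                               # logistic-loss gradient
                w -= 0.1 * F.T @ g / len(y)
                b -= 0.1 * g.mean()

            accuracy = np.mean(((F @ w + b) > 0) == (y == 1))
            ```

            Because only the tiny head is trained, a few labelled samples suffice, which is exactly why transfer learning helps in the few-shot setting described above.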



            But we do not in fact need such labeled data to perform such a task, much as babies don't need nearly as much labeled data as the neural networks you are thinking of do.



            For instance, one such unsupervised method that I have also successfully applied in other contexts is to take an unlabeled set of images, randomly rotate them, and train a network to predict which side of the image is 'up'. Without knowing what the visible objects are, or what they are called, this forces the network to learn a tremendous amount of structure about the images; and this can form an excellent basis for much more data-efficient subsequent labeled learning.
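            The self-labelling step of that method can be sketched as follows (toy arrays; the actual training of a network to predict the rotation label is omitted):

            ```python
            import numpy as np

            # Self-supervised labels for free: rotate each unlabelled image by
            # a random multiple of 90 degrees and record the rotation index as
            # the label a network would then be trained to predict.
            rng = np.random.default_rng(2)
            images = rng.random(size=(100, 6, 6))             # unlabelled "images"

            rotations = rng.integers(0, 4, size=len(images))  # 0, 90, 180, 270 degrees
            rotated = np.stack([np.rot90(img, k)
                                for img, k in zip(images, rotations)])

            # Sanity check: undoing each recorded rotation recovers the
            # original image, so no hand labelling was ever needed.
            restored = np.stack([np.rot90(img, -k)
                                 for img, k in zip(rotated, rotations)])
            ```

            The point is that `rotations` is a supervision signal manufactured from the data itself, so arbitrarily large "labelled" training sets come at no annotation cost.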



            While it is true that artificial networks are quite different from real ones in probably meaningful ways, such as the absence of an obvious analogue of backpropagation, it is very probably true that real neural networks make use of the same tricks, of trying to learn the structure in the data implied by some simple priors.



            One other example, which almost certainly plays a role in animals and has also shown great promise in understanding video, is the assumption that the future should be predictable from the past. Just by starting from that assumption, you can teach a neural network a whole lot. Or on a philosophical level, I am inclined to believe that this assumption underlies almost everything we consider to be 'knowledge'.



            I am not saying anything new here; but it is relatively new in the sense that these possibilities are too young to have found many applications yet, and have not yet percolated down to the textbook understanding of 'what an ANN can do'. So to answer the OP's question: ANNs have already closed much of the gap that you describe.






            share|cite|improve this answer









            $endgroup$













              Your Answer





              StackExchange.ifUsing("editor", function () {
              return StackExchange.using("mathjaxEditing", function () {
              StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
              StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
              });
              });
              }, "mathjax-editing");

              StackExchange.ready(function() {
              var channelOptions = {
              tags: "".split(" "),
              id: "65"
              };
              initTagRenderer("".split(" "), "".split(" "), channelOptions);

              StackExchange.using("externalEditor", function() {
              // Have to fire editor after snippets, if snippets enabled
              if (StackExchange.settings.snippets.snippetsEnabled) {
              StackExchange.using("snippets", function() {
              createEditor();
              });
              }
              else {
              createEditor();
              }
              });

              function createEditor() {
              StackExchange.prepareEditor({
              heartbeatType: 'answer',
              autoActivateHeartbeat: false,
              convertImagesToLinks: false,
              noModals: true,
              showLowRepImageUploadWarning: true,
              reputationToPostImages: null,
              bindNavPrevention: true,
              postfix: "",
              imageUploader: {
              brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
              contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
              allowUrls: true
              },
              onDemand: true,
              discardSelector: ".discard-answer"
              ,immediatelyShowMarkdownHelp:true
              });


              }
              });














              draft saved

              draft discarded


















              StackExchange.ready(
              function () {
              StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f394118%2fwhy-do-neural-networks-need-so-many-training-examples-to-perform%23new-answer', 'question_page');
              }
              );

              Post as a guest















              Required, but never shown

























              8 Answers
              8






              active

              oldest

              votes








              8 Answers
              8






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes









              42












              $begingroup$

              There's a kind of goalpost moving that underlies this question. It used to be that NNs weren't very good at image recognition, so no one compared them to humans. Now that NNs are good at image tasks, suddenly it's the fault of NNs that they require a lot of training data to be comparable to children.



              You can also turn this logic on its head. Suppose a child sees a number of cars the day that it's born. I wouldn't expect the child to be able to pick out a car the next day, or the next week, even though it's seen so many examples. Why are newborns so slow to learn? Because it takes a lot of exposure to the real world and, and the passage of time to change the child's neural pathways. For a neural network, we call this “training data,” but for a child we call it “growing up.”






              share|cite|improve this answer











              $endgroup$









              • 9




                $begingroup$
                To make it a bit more specific, a human child has already had years of training with tens of thousands of example allowing them to determining how objects look when viewed from different angles, how to identify their boundaries, the relationship between apparent size and actual size, and so on.
                $endgroup$
                – David Schwartz
                20 hours ago






              • 7




                $begingroup$
                A child's brain is active inside the womb. The baby can identify their parents by sound, after the sound is filtered through water. A new-born baby had months of data to work with before they're born, but they still need years more before they can form a word, then couple more years before they can form a sentence, then couple more for a grammatically correct sentence, etc... learning is very complicated.
                $endgroup$
                – Nelson
                17 hours ago


















              42












              $begingroup$

              There's a kind of goalpost moving that underlies this question. It used to be that NNs weren't very good at image recognition, so no one compared them to humans. Now that NNs are good at image tasks, suddenly it's the fault of NNs that they require a lot of training data to be comparable to children.



              You can also turn this logic on its head. Suppose a child sees a number of cars the day that it's born. I wouldn't expect the child to be able to pick out a car the next day, or the next week, even though it's seen so many examples. Why are newborns so slow to learn? Because it takes a lot of exposure to the real world and, and the passage of time to change the child's neural pathways. For a neural network, we call this “training data,” but for a child we call it “growing up.”






              share|cite|improve this answer











              $endgroup$









              • 9




                $begingroup$
                To make it a bit more specific, a human child has already had years of training with tens of thousands of example allowing them to determining how objects look when viewed from different angles, how to identify their boundaries, the relationship between apparent size and actual size, and so on.
                $endgroup$
                – David Schwartz
                20 hours ago






              • 7




                $begingroup$
                A child's brain is active inside the womb. The baby can identify their parents by sound, after the sound is filtered through water. A new-born baby had months of data to work with before they're born, but they still need years more before they can form a word, then couple more years before they can form a sentence, then couple more for a grammatically correct sentence, etc... learning is very complicated.
                $endgroup$
                – Nelson
                17 hours ago
















              42












              42








              42





              $begingroup$

              There's a kind of goalpost moving that underlies this question. It used to be that NNs weren't very good at image recognition, so no one compared them to humans. Now that NNs are good at image tasks, suddenly it's the fault of NNs that they require a lot of training data to be comparable to children.



              You can also turn this logic on its head. Suppose a child sees a number of cars the day that it's born. I wouldn't expect the child to be able to pick out a car the next day, or the next week, even though it's seen so many examples. Why are newborns so slow to learn? Because it takes a lot of exposure to the real world, and the passage of time, to change the child's neural pathways. For a neural network, we call this “training data,” but for a child we call it “growing up.”

              $endgroup$







              edited yesterday

              answered yesterday

              Sycorax

              40.6k12104204

              • 9

                $begingroup$
                To make it a bit more specific, a human child has already had years of training with tens of thousands of examples, allowing them to determine how objects look when viewed from different angles, how to identify their boundaries, the relationship between apparent size and actual size, and so on.
                $endgroup$
                – David Schwartz
                20 hours ago

              • 7

                $begingroup$
                A child's brain is active inside the womb. The baby can identify their parents by sound, after the sound is filtered through water. A new-born baby has had months of data to work with before they're born, but they still need years more before they can form a word, then a couple more years before they can form a sentence, then a couple more for a grammatically correct sentence, etc. Learning is very complicated.
                $endgroup$
                – Nelson
                17 hours ago















              29












              $begingroup$

              First of all, at age two a child already knows a lot about the world and actively applies this knowledge. The child does a lot of "transfer learning" by applying this knowledge to new concepts.

              Second, before seeing those five "labeled" examples of a car, the child sees a lot of cars on the street, on TV, as toy cars, etc., so a lot of "unsupervised learning" also happens beforehand.

              Finally, neural networks have almost nothing in common with the human brain, so there's not much point in comparing them. Also note that there are algorithms for one-shot learning, and a fair amount of research on them is currently underway.
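The "transfer learning" point can be made concrete with a toy sketch: a "pretrained" feature extractor (standing in for years of prior experience) plus a tiny head fit on just five labeled examples per class. Everything here — the toy objects, the hand-coded features, the numbers — is invented purely for illustration:

```python
import random

random.seed(0)

# Toy "world": an object is a (width, height, wheels) triple.
# Cars are wide, short, and wheeled; trees are tall and wheel-less.
def sample(label):
    if label == "car":
        return (random.uniform(3.0, 5.0), random.uniform(1.0, 2.0), 4)
    return (random.uniform(0.5, 1.5), random.uniform(4.0, 8.0), 0)

# "Pretrained" feature extractor: a stand-in for representations learned
# from years of prior unsupervised experience (hand-coded here).
def features(obj):
    width, height, wheels = obj
    return (float(width > height), float(wheels > 0))

# Few-shot "head": a nearest-centroid classifier fit on the few examples.
def fit_head(examples):
    centroids = {}
    for label in ("car", "tree"):
        feats = [features(o) for o, l in examples if l == label]
        centroids[label] = tuple(sum(col) / len(feats) for col in zip(*feats))
    return centroids

def predict(centroids, obj):
    f = features(obj)
    return min(centroids,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(f, centroids[l])))

# Five labeled examples per class suffice, because the features are good.
train = [(sample(l), l) for l in ("car",) * 5 + ("tree",) * 5]
head = fit_head(train)
test = [(sample(l), l) for l in ("car", "tree") * 50]
accuracy = sum(predict(head, o) == l for o, l in test) / len(test)
```

With good features already in place, the head separates the classes perfectly; trained on raw triples from scratch, the same five examples would go much less far.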






               $endgroup$

               answered yesterday

               Tim

               58.2k9128220























                      5












                      $begingroup$

                      One major aspect that I don't see in the current answers is evolution.

                      A child's brain does not learn from scratch. It's similar to asking how deer and giraffe babies can walk a few minutes after birth: they are born with their brains already wired for this task. Some fine-tuning is needed, of course, but the baby deer doesn't learn to walk from "random initialization".

                      Similarly, the fact that big moving objects exist and are important to keep track of is something we are born with.

                      So I think the presupposition of this question is simply false. Human neural networks have had the opportunity to see tons of moving, rotating 3D objects with difficult textures and shapes (maybe not cars, but), only this happened over many generations, and the learning took place by an evolutionary algorithm: those whose brains were better structured for the task were more likely to live to reproduce, leaving each new generation with better and better brain wiring from the start.
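That evolutionary picture can be sketched with a toy selection loop: instead of training one network's weights, we select over generations for individuals whose initial "wiring" scores best at birth. The one-weight brain and the fitness function below are invented stand-ins:

```python
import random

random.seed(1)

# Toy "innate skill": an individual's brain is a single weight w, and
# fitness at birth is higher the closer w is to the ideal wiring 3.0.
def fitness(w):
    return -(w - 3.0) ** 2

def evolve(generations=200, pop_size=20, mutation=0.1):
    # Generation 0 really is "random initialization".
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the better-wired half survives to reproduce.
        survivors = sorted(population, key=fitness)[pop_size // 2:]
        # Each survivor has two offspring with slightly mutated wiring.
        population = [w + random.gauss(0, mutation)
                      for w in survivors for _ in (0, 1)]
    return max(population, key=fitness)

best = evolve()
```

No individual ever "learns" here; the population drifts toward good wiring purely through selection, which is the sense in which a newborn's brain is not a blank slate.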






                      $endgroup$

                      answered 9 hours ago

                      isarandi

                      22417

                      • 1

                        $begingroup$
                        Fun aside: there's evidence that when it comes to discriminating between different models of cars, we actually leverage the specialized facial recognition center of our brain. It's plausible that, while a child may not distinguish between different models, the implicit presence of a 'face' on a mobile object may cause cars to be categorized as a type of creature and therefore be favored to be identified by evolution, since recognizing mobile objects with faces is helpful to survival.
                        $endgroup$
                        – Dan Bryant
                        5 hours ago











                      4












                      $begingroup$

                      This is a fascinating question that I've also pondered a lot, and I can come up with a few possible explanations.




                      • Neural networks work nothing like the brain. Backpropagation is unique to neural networks, and does not happen in the brain. In that sense, we just don't know the general learning algorithm in our brains. It could be electrical, it could be chemical, it could even be a combination of the two. Neural networks could be considered an inferior form of learning compared to our brains because of how simplified they are.

                      • If neural networks are indeed like our brain, then human babies undergo extensive "training" of the early layers, like feature extraction, in their early days. So their neural networks aren't really trained from scratch, but rather the last layer is retrained to add more and more classes and labels.






                      $endgroup$

                      answered 18 hours ago

                      sd2017

                      411
                              4












                              $begingroup$

                              I don't know much about neural networks, but I know a fair bit about babies.

                              Many 2-year-olds have a lot of trouble with how general words should be. For instance, it is quite common at that age for kids to use "dog" for any four-legged animal. That's a more difficult distinction than "car": just think how different a poodle looks from a Great Dane, for instance, and yet they are both "dog" while a cat is not.

                              And a child at 2 has seen many, many more than 5 examples of "car". A kid sees dozens or even hundreds of examples of cars any time the family goes for a drive. And a lot of parents will comment "look at the car" a lot more than 5 times. But kids can also think in ways they weren't explicitly taught. For instance, on the street the kid sees lots of things lined up. His dad says (of one) "look at the shiny car!" and the kid thinks "maybe all those other things lined up are also cars?"






                                   $endgroup$

                                   answered 10 hours ago

                                   Peter Flom

                                   75.6k11107208



                                      2












                                      $begingroup$

                                      One way to train a deep neural network is to treat it as a stack of auto-encoders (or of Restricted Boltzmann Machines, a closely related approach).



                                      In theory, an auto-encoder learns in an unsupervised manner: It takes arbitrary, unlabelled input data and processes it to generate output data. Then it takes that output data, and tries to regenerate its input data. It tweaks its nodes' parameters until it can come close to round-tripping its data. If you think about it, the auto-encoder is writing its own automated unit tests. In effect, it is turning its "unlabelled input data" into labelled data: The original data serves as a label for the round-tripped data.
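That round-tripping idea can be shown with the smallest possible example: a tied-weight, one-parameter linear autoencoder trained by gradient descent on reconstruction error until decode(encode(x)) ≈ x. This is a toy linear model (not an RBM), with numbers invented for the sketch:

```python
import random

random.seed(0)

# Tied-weight linear autoencoder: encode(x) = w * x, decode(c) = w * c.
# Perfect round-tripping (decode(encode(x)) == x) requires w * w == 1.
w = 0.5                                   # start far from a solution
lr = 0.01
data = [random.uniform(-1.0, 1.0) for _ in range(100)]

for _ in range(500):                      # unsupervised "pre-training"
    for x in data:
        err = w * (w * x) - x             # reconstruction error
        w -= lr * 4 * w * x * err         # gradient of err**2 w.r.t. w

round_trip_error = max(abs(w * (w * x) - x) for x in data)
```

Note that the data is unlabelled throughout: each input serves as its own target, which is exactly the "self-labelling" described above.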



                                      After the layers of auto-encoders are trained, the neural network is fine-tuned using labelled data to perform its intended function. In effect, these are functional tests.



                                      The original poster asks why a lot of data is needed to train an artificial neural network, and compares that to the allegedly low amount of training data needed by a two-year-old human. This is comparing apples to oranges: the overall training process for the artificial neural net versus the fine-tuning with labels for the two-year-old.

                                      But in reality, the two-year-old has been training its auto-encoders on random, self-labelled data for more than two years. Babies dream when they are in utero. (So do kittens.) Researchers have described these dreams as involving random neuron firings in the visual processing centers.






— answered by Jasper (new contributor)


















"A human child at age 2 needs around 5 instances of a car to be able to identify it with reasonable accuracy regardless of color, make, etc."




The concept of "instances" gets easily muddied. While a child may have seen 5 unique instances of a car, they have actually seen many thousands of frames, in many differing environments. They have likely seen cars in other contexts. They also have an intuition for the physical world developed over their lifetime - some transfer learning probably happens here. Yet we wrap all of that up into "5 instances."



                                              Meanwhile, every single frame/image you pass to a CNN is considered an "example." If you apply a consistent definition, both systems are really utilizing a much more similar amount of training data.
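A back-of-envelope calculation makes the point concrete. Every number below is a purely illustrative assumption, not a measurement:

```python
# How many "frames" might "5 instances" of a car really be?
# Assumed numbers (illustrative only): 5 sightings, ~30 seconds of
# viewing each, and very roughly ~10 distinct views per second.
sightings = 5
seconds_per_sighting = 30
views_per_second = 10
effective_examples = sightings * seconds_per_sighting * views_per_second
print(effective_examples)  # 1500
```

Under a frame-for-frame accounting, "5 instances" is already on the order of a thousand training examples.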



Also, I would like to note that convolutional neural networks - CNNs - are more effective in computer vision than generic fully-connected ANNs, and in fact approach human performance in tasks like image classification. Deep learning is (probably) not a panacea, but it does perform admirably in this domain.






— answered by spinodal


















As pointed out by others, the data-efficiency of artificial neural networks varies quite substantially, depending on the details. As a matter of fact, there are many so-called one-shot learning methods that can solve the task of labelling trams with quite good accuracy, using only a single labelled sample.



One way to do this is so-called transfer learning: a network trained on other labels is usually very effectively adaptable to new labels, since the hard work is in breaking down the low-level components of the image in a sensible way.



But we do not in fact need such labelled data to perform such a task, much as babies don't need nearly as much labelled data as the neural networks you are thinking of do.



For instance, one such unsupervised method, which I have also successfully applied in other contexts, is to take an unlabelled set of images, randomly rotate them, and train a network to predict which side of the image is 'up'. Without knowing what the visible objects are, or what they are called, this forces the network to learn a tremendous amount of structure about the images, and this can form an excellent basis for much more data-efficient subsequent labelled learning.
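The self-labelling step can be sketched in a few lines of NumPy. The toy random "images" and sizes are illustrative assumptions, and the classifier itself is omitted; the point is that rotating each image by a random multiple of 90 degrees yields a training label for free:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabelled "images": 100 random 8x8 arrays (stand-ins for real photos).
images = rng.normal(size=(100, 8, 8))

def make_rotation_dataset(images, rng):
    # Self-labelling: rotate each image by a random multiple of 90 degrees
    # and use the rotation index (0, 1, 2, 3) as the free class label.
    ks = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, ks)])
    return rotated, ks

X, y = make_rotation_dataset(images, rng)
print(X.shape, y.shape)  # (100, 8, 8) (100,)
# A classifier trained to predict y from X must learn image structure,
# without a single human-provided label.
```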



While it is true that artificial networks differ from real ones in probably meaningful ways, such as the absence of an obvious analogue of backpropagation, it is very probably true that real neural networks make use of the same trick: trying to learn the structure in the data implied by some simple priors.



One other example, which almost certainly plays a role in animals and has also shown great promise in understanding video, is the assumption that the future should be predictable from the past. Just by starting from that assumption, you can teach a neural network a whole lot. On a philosophical level, I am inclined to believe that this assumption underlies almost everything we consider to be 'knowledge'.
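The same idea in its simplest possible form: fit a one-parameter predictor of x[t+1] from x[t], with the sequence itself providing the supervision. The AR(1) toy process and the least-squares fit are illustrative choices, not anything specific to neural networks:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy sequence with structure: x[t+1] = 0.9 * x[t] + small noise.
T = 2000
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = 0.9 * x[t] + 0.1 * rng.normal()

# "The future should be predictable from the past": regress x[t+1]
# on x[t] by least squares -- the labels come for free from the data.
past, future = x[:-1], x[1:]
coef = (past @ future) / (past @ past)
print(coef)  # close to 0.9: the dynamics were recovered without labels
```

A deep network playing the same game on video frames instead of scalars is forced to learn objects, motion, and occlusion, because those are what make the next frame predictable.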



I am not saying anything new here; but it is relatively new in the sense that these possibilities are too young to have found many applications yet, and have not yet percolated down to the textbook understanding of 'what an ANN can do'. So to answer the OP's question: ANNs have already closed much of the gap that you describe.






— answered 12 hours ago by Eelco Hoogendoorn


















                                                              Thanks for contributing an answer to Cross Validated!

