To me the fundamental difference is that AI is trained, algorithms are not. There's no training here; it's a simple frequency count looking for outliers. While it's an approach a human would take, the human is doing it in a very different fashion. And the human is much more sensitive to form, while this is much more sensitive to color.
They are definitely right that our gear (I am a hiker) tends to stand out against nature. Not only is it generally in colors that do not appear in any volume in nature, but almost nothing in the plant and mineral kingdoms is of uniform color. A blob of uniform color is in all probability either a monochromatic animal (the sheep their system detects) or man-made.
What surprises me about this is that it hasn't been tried before.
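To make the "frequency count looking for outliers" idea concrete, here is a rough sketch of what such a check could look like (my own illustration, not their actual pipeline; the bin size and rarity threshold are made-up parameters):

    import numpy as np

    def rare_color_mask(image, bins_per_channel=8, rare_fraction=0.001):
        """Flag pixels whose coarse color bin is rare in the whole image.

        image: HxWx3 uint8 RGB array. Returns a boolean HxW mask.
        """
        h, w, _ = image.shape
        # Quantize each channel into coarse bins (8 levels per channel here).
        quant = (image // (256 // bins_per_channel)).astype(np.int64)
        # Collapse the three per-channel bins into one bin id per pixel.
        bin_ids = (quant[..., 0] * bins_per_channel + quant[..., 1]) * bins_per_channel + quant[..., 2]
        # Frequency count over the whole image.
        counts = np.bincount(bin_ids.ravel(), minlength=bins_per_channel ** 3)
        # A pixel is an outlier if its color bin covers less than
        # rare_fraction of the image.
        rare_bins = counts < rare_fraction * h * w
        return rare_bins[bin_ids]

A real system would presumably also require the flagged pixels to form one compact, uniformly colored blob before raising an alert, which is the uniform-color point above.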
You are confusing AI and Machine Learning, the latter being a subset of the former.
This really gets at one of my issues with the term "AI". There is a very scientific, textbook definition of what Artificial Intelligence is; however, the term carries baggage from sci-fi.
Using a term like "AI" to describe this is like using the term "food" to describe pickles. It's a poor analogy, but "AI" is just so vast that most lay readers, or those not familiar with the phrase from regular computer science discussions, aren't grounded in the consequences.
I feel that we as an industry need to do better, use terms more responsibly, and know our audience. There is a big difference between a clustering algorithm that detects pixels and flags them and a conscious, self-aware system. However, both of those things are "AI", and they have very different consequences.
Sure there is training - most practical algorithms have dozens of tunable parameters: bucket size, thresholds, camera settings, image normalization settings and so on. It may not be 175 billion weights, but this still needs plenty of training data.
I've participated in a hobby robot competition in the past, which required a simple-sounding vision component: find a bright orange object on green grass in bright sunlight, and very roughly estimate its distance. We had to collect 200+ training images and manually label each of them to get any sort of decent performance.
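A detector for that kind of task can be as simple as the following sketch (illustrative only, not our actual code; the channel thresholds, object width and focal length are made-up values, which is exactly the sort of parameter that has to be tuned against labelled images):

    import numpy as np

    def find_orange_blob(image, known_width_m=0.20, focal_px=600.0):
        """Crude bright-orange detector with a pinhole distance estimate.

        image: HxWx3 uint8 RGB array. known_width_m and focal_px are assumed
        values for the real object width and the camera focal length in pixels.
        Returns (mask, distance_m), with distance_m None if nothing was found.
        """
        r = image[..., 0].astype(int)
        g = image[..., 1].astype(int)
        b = image[..., 2].astype(int)
        # "Orange" here is just: red strong, green moderate, blue weak.
        # Every one of these numbers needs tuning for lighting conditions.
        mask = (r > 150) & (g > 60) & (g < 160) & (b < 90) & (r > g + 40) & (g > b + 20)
        if mask.sum() < 50:              # too few pixels: treat as no detection
            return mask, None
        cols = np.where(mask.any(axis=0))[0]
        width_px = cols.max() - cols.min() + 1
        # Pinhole camera model: distance ~ focal_px * real_width / apparent_width.
        return mask, focal_px * known_width_m / width_px

Getting the thresholds and the minimum-pixel cutoff to survive changing sunlight is where all those labelled images go.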
This is the list of discussion topics from the 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence, where the term was first introduced:
From: https://web.archive.org/web/20070826230310/http://www-formal...

The following are some aspects of the artificial intelligence problem:

1. Automatic Computers
If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.

2. How Can a Computer be Programmed to Use a Language
It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.

3. Neuron Nets
How can a set of (hypothetical) neurons be arranged so as to form concepts. Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.

4. Theory of the Size of a Calculation
If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.

5. Self-Improvement
Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.

6. Abstractions
A number of types of "abstraction" can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.

7. Randomness and Creativity
A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of a some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking.
So, no, the fundamental difference is not that "AI is trained, algorithms are not". Some hand-crafted algorithms fall under the purview of AI research. Modern examples are graph-search algorithms like MCTS and A*.
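For the unfamiliar: A* is exactly that kind of hand-crafted AI - no learned weights, just a priority queue and an admissible heuristic. A minimal grid version (an illustrative sketch of the standard algorithm, not any particular library's implementation) looks like this:

    import heapq

    def astar(grid, start, goal):
        """A* on a 2D grid of 0 (free) / 1 (blocked) cells, 4-connected.

        grid: list of lists; start, goal: (row, col) tuples.
        Returns the path length in steps, or None if goal is unreachable.
        """
        def h(p):  # Manhattan distance, admissible on a 4-connected grid
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        rows, cols = len(grid), len(grid[0])
        open_heap = [(h(start), 0, start)]      # entries are (f = g + h, g, node)
        best_g = {start: 0}
        while open_heap:
            f, g, node = heapq.heappop(open_heap)
            if node == goal:
                return g
            if g > best_g.get(node, float("inf")):
                continue                        # stale queue entry, skip it
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = node[0] + dr, node[1] + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < best_g.get((nr, nc), float("inf")):
                        best_g[(nr, nc)] = ng
                        heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
        return None

All the "intelligence" lives in the heuristic you hand it; nothing is trained.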
A* is a miss on 3, 5 and 7 at minimum.
Novel stuff is AI, old stuff is statistics. Decision trees used to be called AI :)
I mean, if something as traditional as simple clustering is AI, then so is linear regression, and Excel spreadsheets have been doing AI/ML for the past two decades.
At some point we just have to stop with the breathless hype. I'm sure labelling it as AI gets more clicks and exposure so I know exactly why they do it. Still, it's annoying.
At least until recently any introductory machine learning course would teach linear regression and clustering, the latter as an example of unsupervised learning.
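For a sense of scale, the whole of vanilla k-means - the canonical unsupervised-learning demo - fits in a few lines (a toy sketch using Lloyd's iterations with a fixed iteration count, not how a production library would implement it):

    import numpy as np

    def kmeans(points, k, iters=20, seed=0):
        """Toy k-means: points is an (n, d) float array; returns (centroids, labels)."""
        rng = np.random.default_rng(seed)
        # Start from k distinct points chosen at random.
        centroids = points[rng.choice(len(points), size=k, replace=False)].astype(float)
        for _ in range(iters):
            # Assign each point to its nearest centroid.
            dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each centroid to the mean of its assigned points.
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = points[labels == j].mean(axis=0)
        return centroids, labels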
Sure, but as a stepping stone.
There is no model here, there is no neural net.
Yes! AI is any sort of machine intelligence, and it's been around for more than two decades - the 80s even had their own "AI winter", after all.
There is no intelligence here, only pattern matching.
You're only saying this because we're in a hype cycle. Circa 2018, there was no problem at all with calling this AI: in fact, it was normal.
Back then we still called things image classifiers or machine learning, and when you said AI most people probably had an image of Arnold Schwarzenegger or Cortana flash in their mind.
it was not.