Tech Giants Are Paying Huge Salaries For Scarce AI Talent (santafenewmexican.com) 156
jmcbain writes: Machine learning and artificial intelligence skills are in hot demand right now, and it's driving up the already-high salaries in Silicon Valley. "Tech's biggest companies are placing huge bets on artificial intelligence (Warning: may be paywalled; alternative source)," reports the New York Times, and "typical AI specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock." The New York Times notes there are several catalysts for rocketing salaries that all come down to supply and demand. There is competition among the giant companies (e.g. Google, Facebook, and Uber) as well as from automotive companies wanting help with self-driving cars. However, the biggest issue is the supply: "Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can. Solving tough A.I. problems is not like building the flavor-of-the-month smartphone app. In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal."
The solution is self obvious (Score:1)
Re: (Score:3)
Read Godel much? (Umlaut the o)
Re: (Score:2)
Read Godel much? (Umlaut the o)
Ök.
Re: (Score:2)
A sufficiently talented AI 'researcher' should be able to code a self-aware AI researching AI.
It's weird enough for AI specialists to deal with the fact that they're coding the demise of thousands of human jobs. Don't make it worse by reinforcing the fact that they will likely eventually join their unemployable brethren...
Re: (Score:2)
They aren't coding the demise of jobs; they are coding a change to the economy, as our current system won't work if no one has jobs. So the entire system will need to change.
As companies work to replace human cashiers with automated tellers, tell me again how the CEO gives a shit about the "entire system"? This is but one example. Rinse and repeat as automation drives forward.
That ever-widening gap between the 1% and the 99% represents a growing I-don't-give-a-shit mentality. The masses will eventually be unemployable, driven by the insatiable greed of the wealthy elite, who couldn't care less about long-term consequences. They can't even see past the next fiscal quarter.
Since
Re: (Score:2)
I thought this was an article about people getting paid $300-$500k/year for jobs. Well within the 1%. That's more than the president of my company made. Not all CEOs are billionaires.
I wasn't referring to the ones writing code. I was referring to the impact of that activity, and the risk of society not adopting quickly enough.
AI is just brute force and speech-to-text (Score:1)
We don't have strong AI yet. There aren't 10,000 people who know how to get to strong AI; there are currently 0. As for our current AI, anyone can develop it, and you just need marketing to announce it as the next big thing.
Re: (Score:2)
There aren't 10,000 people
Back in the prehistoric days of /., I remember reading something from, I think, Linus about Linux, saying something along the lines of: when Microsoft started, there were fewer than 1,000 people who knew what an operating system was, and they had a limited talent pool to choose from. Now hundreds of thousands, even millions, of people do.
Easy solution (Score:2)
Resources (Score:2)
That is true of all specialities.... (Score:5, Interesting)
In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal
In the entire world, fewer than 1000 people have the skills necessary to do unstructured tetrahedral finite element mesh generation. It is possible there are fewer than 1000 people who have the skills necessary to understand what exactly we mesh makers do. And, surprise! There is demand for fewer than 1000 people to do unstructured tetrahedral finite element mesh generation. And far fewer than 1000 people are needed to manage them.
I am glad the periodic bubbles that infect Wall Street and venture capitalists benefit PhDs once in a while. Most of the time they benefit hedge fund monkeys or stock market cheats or lottery winners with delusions of grandeur or plain sociopaths. Happy for my grad school classmates. Enjoy the windfall while it lasts, Ramachandrans, Yangs, Hsus, Guptas, Parpias and Wickramasinghes.
curvature as captive starch (Score:5, Insightful)
Nice rhetoric—factual statement masquerading as metaphor, for any reader dumb enough to go along for the ride.
The Evolution of the Flour Mill from Prehistoric Ages to Modern Times [angelfire.com] — 1905
That's about the present state of machine learning, the hand-crafting of "features" playing the role of the recently discarded flat blocks.
Wheat is an incredible dietary resource, with the starch being light enough to transport over long distances, if only one can find a way to remove it (contrast potatoes, only ever transported downhill, if at all, until the invention of steam power). Once upon a time, all food was local, as, too, was starvation (fear the blight).
A better method to mill the world's vast stores of accumulated data is a big deal, even if we remain in the relatively crude era of water-powered stone grinding wheels.
Data is a bit like wheat, it doesn't give up its curvature easily. Too much applied force creates heat and destroys the end product. The applied force must have exactly the right ratio of compressive to shear stress, which only an expert miller can judge. Deep learning is nothing more than a slightly better mill than the one we had before, and it ranks right up there beside becoming slightly better at milling wheat.
The economic value of the curvature we can now hope to unlock is quite large. And probably there's a lot of curvature yet to find that remains inaccessible to current methodology.
Data is oil. Data is also wheat.
By way of contrast, unstructured tetrahedral finite element mesh generation shaves 5% of the metal mass off a milling apparatus that already worked just fine, being just one of ten thousand noisy specializations in the great roil of small improvements where a penny shaved is a penny earned.
Nevertheless, apparently a great career option for the metaphorically challenged.
Re:curvature as captive starch (Score:4, Funny)
Data is a bit like wheat, it doesn't give up its curvature easily. Too much applied force creates heat and destroys the end product. The applied force must have exactly the right ratio of compressive to shear stress, which only an expert miller can judge.
Deep, dude... deep. Have you considered writing Slashdot summaries?
Re: (Score:1)
Wheat is an incredible dietary resource
If by that you mean wheat has been found to be a highly potent anti-nutrient linked with a range of digestive and neurological disorders, you'd be right.
Too smart for your own good (Score:2, Interesting)
I know a guy who cuts and pastes Javascript snippets to make interactive query windows for websites. His main job is as a creative director at a mid-tier advertising agency. He calls his work 'AI research'. I am not kidding - he makes over $100k per year.
The biggest problem I have found with smart people is that they don't think stupid people should be paid lots of money for work they think is simple. The more successful ones have figured out that it is much better to cash in on such situations than to lament them.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Hurray for the gostak!
Greetings Professor Falken (Score:3)
Greetings Professor Falken!
My friend with 25 years of relevant experience. (Score:2)
Hello [UserName], my name is doctor Sbaitso.
I am here to help you.
This IS what a shortage looks like! (Score:1)
When companies start paying like that, that's indicative of a shortage.
Remember this folks the next time some company claims they can't get enough qualified people.
I just got my PhD (Score:2, Informative)
I just finished my PhD in neural networks and AI, and I can say that this article comes up a bit short. I actually got offered $625K (granted it's San Jose, so it's more like $150K anywhere else in the country)... I turned it down because San Jose sucks, so I took a job at a company on the east coast instead for $195K.
AI-related jobs are certainly at a premium right now as companies scramble to get rid of humans and replace them with robots.
Re:I just got my PhD - me too! (Score:2, Funny)
I just got mine in gender studies and I have been writing papers on how AI is the projection of masculine hegemony, domination, and misogyny into the computing sphere. And how it will add to global warming and increase the relevance of the penis social construct in masculine psychology and subjugating other genders including trans and cis genders.
It's a great job. I write gibberish, teach classes to sanctimonious well-to-do white kids, and get paid almost 6 figures - for working about 4 hours a week.
Re: (Score:2)
Pure BS (Score:5, Insightful)
Modern AI software isn't that complicated, and hiring people for it isn't nearly that expensive. Look at job offers: $150k for AI research scientists in NYC, $65k in more rural areas. That's not especially well paid at all. Sure, a pure AI scientist gets paid $500k, just like a top neuroscience researcher or a top biology researcher gets paid $500k, but the majority of companies do not want to do theoretical AI development; any regular programmer can wrap their head around the existing literature and build something.
Here in my area, there are a number of employers looking for AI engineers/scientists. They pay about what I make as a non-AI IT sysadmin, which, given my experience, is on the higher end of the scale but by no means exceptional.
What Google and co want is a glut of people 4-6 years from now who are "trained" in AI from college. Put out a report like this and you get massive numbers of people applying to the schools that offer programs, and 5 years from now you have an overabundance of people driving down overall wages. You also get to hire a bunch of people on H1B because the "US doesn't have the skillz," and you end up with a bunch of programmers on H1B under the guise of AI development.
Re: (Score:3)
Modern AI software isn't that complicated, and hiring people for it isn't nearly that expensive. Look at job offers: $150k for AI research scientists in NYC, $65k in more rural areas. That's not especially well paid at all. Sure, a pure AI scientist gets paid $500k, just like a top neuroscience researcher or a top biology researcher gets paid $500k, but the majority of companies do not want to do theoretical AI development; any regular programmer can wrap their head around the existing literature and build something.
Here in my area, there are a number of employers looking for AI engineers/scientists. They pay about what I make as a non-AI IT sysadmin, which, given my experience, is on the higher end of the scale but by no means exceptional.
What Google and co want is a glut of people 4-6 years from now who are "trained" in AI from college. Put out a report like this and you get massive numbers of people applying to the schools that offer programs, and 5 years from now you have an overabundance of people driving down overall wages. You also get to hire a bunch of people on H1B because the "US doesn't have the skillz," and you end up with a bunch of programmers on H1B under the guise of AI development.
Methinks that you are spot on. They're trying to build a glut and drive salaries down. What's going to happen when AI becomes so simple that the home-based small business can do it? This is kind of a repeat of the days when companies were "scrambling" for people with MCSEs and Novell Certified Engineers. The good times lasted a short while, the bubble burst, and there was a glut of trained folks. For a while in the early 2000s, I remember that employers looking for mere help desk talent were requiring people
Re: Pure BS (Score:2)
Home-based businesses can already do AI. I've built an "AI system" for advertisement optimization using crowd tracking from open source libraries and some Python glue in a matter of hours. The thing "learns" and can "predict" demand, if only very rudimentarily. I built something similar about a decade ago for systems monitoring, and although it ended up being fed just random data, it could predict hard drive failures rather accurately simply based on peripheral data.
Pure AI does
Re: (Score:2)
That always happens. When it gets simple enough, people call it by the name of a technique and it ceases to be AI. In the meantime, there are always harder problems to work on, which will be considered AI for some time to come.
Re: (Score:2)
So what is the alternative? If you have a skills shortage, you should try to hush it up, and whatever you do, don't advertise well-paid positions with those skills, because it might cause people to acquire them and drive down wages in 5 years' time.
This is how it has always worked. New skills come into demand due to advances in technology or changes in society. Initially they are highly paid; eventually they become common and less well paid. No point complaining about it, you just have to keep developing new skills.
Re: (Score:2)
My point is not about AI jobs (which don't quite exist yet); those are fine and will go through the process naturally. The problem here is that Google and co are disguising regular programming jobs as 'working on AI' jobs and then claiming a shortage.
The other point I was trying to make is that there is no actual shortage of "AI research people" to work on self-driving cars and image recognition; we've figured out the theory behind that. The implementation, however, requires some programming people, and here again
Not true (Score:1)
This is spin. They are trying to get more entrants into the field. I am in the field, and I don't know anyone making more than ~$120k/year, which is the mean for CS workers in Silicon Valley.
Re: (Score:2)
They are trying to get more entrants into the field
And lobby for an increase in the H1B visa cap.
Don't trust this! (Score:2)
Another bubble? (Score:3)
Re: (Score:1)
The trick is how to profit from a bubble. If the big cos buy up all the AI cos, then you cannot buy puts on them to gain when they crash. An AI crash won't affect the big cos that much.
There is no cake (Score:2)
""Tech's biggest companies are placing huge bets on artificial intelligence (Warning: may be paywalled;"
And also no paywall, if you delete your cookies after the allowed number of articles.
These 'paywalls' are a joke.
Re: (Score:2)
""Tech's biggest companies are placing huge bets on artificial intelligence (Warning: may be paywalled;"
And also no paywall, if you delete your cookies after the allowed number of articles.
These 'paywalls' are a joke.
Security should be provided at an appropriate level given the value of the object being protected... ;^)
Given the articles they are protecting, I think they are doing a bang-up job...
Slashdot article confounds unrelated things (Score:2)
Author writes "AI Talent", then refers to Machine Learning.
Machine Learning refers to a few statistical regression techniques; that is not what artificial intelligence is about.
Artificial Intelligence is more of a research field, and the concept of human intelligence remains an unsolved problem ---
what it sounds like they are really hiring are hard-core Computer Scientists with some experience attacking real-world solution-finding problems like extracting useful intelligence from data, classifying or ide
Huuuge! (Score:2)
They should hire me for a huge salary...
I did an AI course as part of my BSc Computer Science back in the 90's.
For a project I made an "Expert System" (or was it called a Smart System back then?), wherein I wrote a front end in VB6 (might have been 4) attached to an Access DB that contained all the beers on tap (I think there were 30ish) at one of our favorite drinking establishments, along with all of their characteristics, and which prompted the user with a bevy of questions to determine and suggest wh
Re:Testing Phase (Score:3)
I bet you spent a good bit of time on the "testing phase" to make sure it really worked.
Re: (Score:2)
Ah, but do you have experience writing an application based on a CNN running on an NVidia DGX-1 while working at a supercomputing research lab, mentored by a leading expert in the field? If not, you're not qualified.
Re: (Score:1, Interesting)
I am sure "Element AI" wants to pretend there is such a thing as AI, but there isn't. Playing "Go" is not "AI" and neither is autonomous driving. If you are going to start calling computer algorithms and programs "AI", then everything that runs on a computer is "AI".
This.
Re: (Score:3, Insightful)
This is a perfect example of companies (HR) searching for stuff that doesn't exist.
No doubt all of their job postings require 5 more years experience in AI than it's actually been around.
While there is undoubtedly a shortage of talent, odds are that the industry is screening out many who do have AI experience, but fail to meet the rigid and ignorant requirements HR is looking for.
Re: (Score:2)
No doubt all of their job postings require 5 more years experience in AI than it's actually been around.
That would be 66 years of experience, then, right?
Re: (Score:3)
No doubt all of their job postings require 5 more years experience in AI than it's actually been around.
That would be 66 years of experience, then, right?
Well, that brings it into the same scale as the claim that the stock is worth $500k. I read "from $300,000 to $500,000 a year or more in salary and company stock" and I hear, "$32k and a box of scratch paper." But in my experience, most of these companies won't actually have $32k to pay out and will try to bid that down with more toilet paper.
Established companies whose stock has value are going to want to pay employees using cash, and they'll have no trouble finding experts for $200k because any competent software engineer can become an AI specialist in a few weeks. Most of these jobs are just standard software development using an API, after all.
Re: (Score:2)
any competent software engineer can become an AI specialist in a few weeks. Most of these jobs are just standard software development using an API, after all
We then have an entirely different view of what "AI specialist" means, then. That's more like "AI tinkerer" to me. (For that matter, would someone passing an SQL crash course qualify as "a database specialist"?) Now I'm not saying that you can't possibly go through Winston or Norvig in a few weeks if you're dedicated enough and have spare time to throw at it, but sometimes deciding on techniques to use is a matter of experience that you probably won't gather in a few weeks.
Re: (Score:2)
Right, I used the word meanings that make sense in the context; the meaning of "specialist" as it relates to job listings.
You're using a definition that is from a different context, apparently because you find that other context to be more important. The reality, though, is that words don't have meaning outside of a context.
The important thing is what type of work they're doing, are they using an API or creating one? That is what tells you which definition to use.
Re: (Score:1)
That reminds me of a job ad I saw back in 1996 for a Java developer. The ad stated that applicants needed to have 10 years of experience in Java development. However, at that point, Java had only been around for a short while. A friend of mine commented that even the people who created Java wouldn't be qualified.
The stupidity of HR in the tech world never ceases to amaze me.
Re: (Score:2)
They're asking for 10 years of experience; the answer is bullshitting more experience than you've got.
I'm massively overqualified...the trick is being able to learn fast enough to backfill.
Re: (Score:2)
Usually they receive resumes, look at the most experience in any field that any one person has, the lowest salary that someone was earning, then generate the perfect candidate profile. So if someone sprinkles in some porkies into their resume, that gets added to the perfect candidate profile. HR also start recruiting by trying to find project managers and architects first (10+ years experience), then the tech leads, team leaders, senior engineers, then finally engineers and interns. So they want a JAVA arch
Re: (Score:1)
Holy shit. I clicked this article thinking, "I wonder how long before the butthurt 110010001000 comes along, shunned because of his pathological inability to think beyond his egotistical delusions about how he thinks computers should work, and claims there's no such thing as AI"
Ever think of going outside?
Re: (Score:3)
From my investigation of the matter it looks to be some sort of multi-variate analysis in drag. Uninteresting. Basically you get guys sitting around twiddling knobs. Finding the right parameters which works for a little bit and then you start knob twiddling again to find the next ones.
Some years back I wrote a day trading program for a friend. It dynamically changed its behavior depending on the market signals and the rules he gave it (stops, buys, shift to a different stock etc.) which he found useful. Now that was fun.
Deep nets (Score:5, Interesting)
From my investigation of the matter it looks to be some sort of multi-variate analysis in drag. Uninteresting. Basically you get guys sitting around twiddling knobs. Finding the right parameters which works for a little bit and then you start knob twiddling again to find the next ones.
Except for one key difference: with Deep Neural Nets, the knobs twiddle themselves alone.
DNNs take inspiration from how some neural networks work in nature (e.g., a column in the primary visual cortex of the brain) to design things that you can throw at problems and which will autonomously train themselves.
Some years back I wrote a day trading program for a friend. It dynamically changed its behavior depending on the market signals and the rules he gave it (stops, buys, shift to a different stock etc.) which he found useful. Now that was fun.
These older programs require you to have precise criteria in advance.
That works perfectly well for clearly codified problems: the friend had a clear set of rules that needed implementation.
It completely fails for vaguer problems ("detect a face"). It would be possible in theory to design a set of rules that can detect a face - a Haar Cascade [opencv.org] - but designing such a set of rules is extremely complex and cumbersome. And each time you need something new ("detect if there's a bird"), you would need to repeat all the hard work to invent yet another set of rules.
At that point, better to take a cue from how mother nature solved the problem (by using stacks of neural networks in columns) and simply throw a DNN (e.g., a Convolutional Neural Net - a ConvNet) at the problem, and watch it self-organize and come up with a solution.
It's the modern-day equivalent of training pigeons to peck at city images to steer a missile [wikipedia.org].
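To make "the knobs twiddle themselves" concrete: the self-adjustment is just gradient descent on a loss. Here is a hypothetical toy sketch in plain numpy (not anyone's production code) in which a tiny two-layer net learns XOR without a single hand-written rule:

```python
import numpy as np

# Toy illustration of "the knobs twiddle themselves": a tiny
# 2-layer net learns XOR by gradient descent. No rule for XOR is
# ever coded; the weights adjust themselves from the examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    p = sigmoid(h @ W2 + b2)
    dp = (p - y) * p * (1 - p)        # backprop: d(MSE)/d(pre-activation)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad           # the "knobs" update themselves

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel().tolist())          # predictions after training
```

The contrast with the hand-crafted-rules approach is exactly the point above: nothing in this code describes what XOR is; the training loop finds the parameters on its own.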
Re: (Score:2)
Interesting read. I hadn't heard of Haar wavelets
Re: (Score:2)
the knobs twiddle themselves alone.
So how is this different from adaptive control, which uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties? Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field. Adaptive control [wikipedia.org].
Common methods of estimation include recursive least squares. Hmm, who knew I had thirty years of AI experience.
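For readers who haven't met it, the recursive least squares mentioned above updates a parameter estimate one sample at a time rather than refitting the whole batch. A minimal hypothetical sketch in numpy (function and variable names are mine, not from any textbook):

```python
import numpy as np

# Recursive least squares (RLS): on-line identification of linear
# model parameters, updated one sample at a time instead of
# re-solving the whole regression each time new data arrives.
def rls_fit(samples, n_params, lam=1.0):
    theta = np.zeros(n_params)          # parameter estimate
    P = np.eye(n_params) * 1e6          # large initial covariance
    for x, target in samples:
        x = np.asarray(x, dtype=float)
        k = P @ x / (lam + x @ P @ x)   # gain vector
        theta = theta + k * (target - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam
    return theta

# Identify y = 2*x1 - 3*x2 from a stream of noise-free samples
rng = np.random.default_rng(1)
stream = [(x, 2 * x[0] - 3 * x[1]) for x in rng.normal(size=(50, 2))]
theta = rls_fit(stream, 2)
print(theta)   # converges to roughly [2, -3]
```

Setting the forgetting factor `lam` below 1 down-weights old samples, which is what lets an adaptive controller track a process whose parameters drift.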
Re: (Score:2)
It's not that different. A lot of current machine learning can be viewed as tractable (often convex) and intractable optimization. Probably a big difference is the size of the pro
Re: (Score:2)
Anyway, this stuff is 70+ years old and the mathematics is well understood and common.
Re: (Score:2)
Scale is big because they didn't have the resources to solve these problems in the past. Now that they have the resources, new issues and problems arise.
As for your inductive argument, there are many cases where it is not true. Many machine learning algorithms suffer from the curse of dimensionality. Too many parameters and the results are worthless. One needs to use things like L1 regularization which is related to com
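The L1 regularization just mentioned can be illustrated with a toy lasso solved by proximal gradient descent (ISTA); the soft-threshold step is what pushes coefficients of irrelevant parameters to exactly zero. A hypothetical numpy sketch, with made-up data:

```python
import numpy as np

# Toy lasso via proximal gradient descent (ISTA). The L1 penalty's
# soft-threshold step drives coefficients of irrelevant parameters
# to exactly zero -- one standard answer to "too many parameters".
rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [4.0, -2.0, 1.5]           # only 3 of 20 features matter
y = X @ true_w                          # noise-free for clarity

w = np.zeros(d)
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant
lam = 0.5                               # L1 penalty strength
for _ in range(2000):
    z = w - step * (X.T @ (X @ w - y))                        # gradient step
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print(np.count_nonzero(np.abs(w) > 1e-6))  # far fewer than 20 survive
```

The surviving coefficients land close to the true values while the spurious dimensions are zeroed out, which is the sense in which L1 fights the curse of dimensionality.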
Pigeons vs helicopter (Score:2)
Why is scaling a big thing. If it works with N parameters, it will work with N+1.
One of the reasons is the complexity of what the parameters describe and how they can be understood by the researcher.
The way it was done in aeronautics could be compared to piloting a helicopter:
- You have a position that you want to keep (easily described with 6 numbers giving coordinates and the direction pointed to).
- You have a bunch of controls (cyclic, collective, anti-torque, throttle), each subtly influencing the others because of gyroscopic effects.
You optimise the main parameters (controls), and maybe you h
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Except for 1 key difference. With Deep-Neural-Nets, the knobs twiddle themselves alone.
What does that even mean? Humans set them their tasks, and their success criteria, and humans ultimately decide if they did the task right.
Re: (Score:3, Insightful)
I am sure "Element AI" wants to pretend there is such a thing as AI, but there isn't. Playing "Go" is not "AI" and neither is autonomous driving...
The term AI tends to drive a belief that perfectly modeling intelligence is a necessary requirement, when it is in fact artificial. Reality dictates the bar is much lower for adoption. Good enough AI specialists will create good enough AI. Autonomous cars don't have to be perfect. They merely have to do better than humans. 40,000 vehicular deaths per year just in the US tends to set the good enough bar pretty damn low. AI will do the same.
Don't want to call it AI? OK, fine. Massive Disruption has a catchy mar
Re: (Score:2, Insightful)
Autonomous cars don't have to be perfect. They merely have to do better than humans. 40,000 vehicular deaths per year just in the US tends to set the good enough bar pretty damn low. AI will do the same.
Actually, if you want people to accept autonomous cars, you will have to do *MUCH* better than humans. Think of the civil lawsuits and subsequent damages if a computer-driven car kills someone...
Re: (Score:3)
Re: (Score:3)
I think you have mis-under-estimated the ability of humans to make seriously fucked up decisions.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Autonomous cars don't have to be perfect. They merely have to do better than humans. 40,000 vehicular deaths per year just in the US tends to set the good enough bar pretty damn low. AI will do the same.
Actually, if you want people to accept autonomous cars, you will have to do *MUCH* better than humans. Think of the civil lawsuits and subsequent damages if a computer-driven car kills someone...
Oh you mean like the massive lawsuits and huge punishments when a smartphone-distracted human kills someone? Oh, wait...nevermind. We don't have any of that shit happening after countless distracted driving injuries and deaths.
Rest assured liability will be buried under the "I Agree" button on the dashboard of the autonomous car.
Re: (Score:2)
I disagree with this. If you're driving a car and cause a fatal crash, you are responsible for your actions. But if a piece of software is driving the car, then surely you can't be held liable. Liability lies with the car or driving software manufacturer, there's no license agreement that can hold in court over that.
Re: (Score:2)
Think of the civil lawsuits and subsequent damages if a computer-driven car kills someone...
I'm not sure why you think this is a problem. Just like people-driven cars, there will be insurance companies that will pick up the tab for accidents.
Re: (Score:2)
They merely have to do better than humans. 40,000 vehicular deaths per year just in the US tends to set the good enough bar pretty damn low.
Not really. Not low enough to avoid massive liability.
When a human kills another in an accident, that's one death. And if the at-fault party dies in the accident as well, who are you going to sue? If a company builds autonomous cars, each model could be liable for thousands of deaths. And the responsible parties are sitting, alive and well, in some fancy corporate office. On top of a large pile of stock. Autonomous cars are an injury lawyer's wet dream.
Re: (Score:2)
They merely have to do better than humans. 40,000 vehicular deaths per year just in the US tends to set the good enough bar pretty damn low.
Not really. Not low enough to avoid massive liability.
When a human kills another in an accident, that's one death. And if the at-fault party dies in the accident as well, who are you going to sue? If a company builds autonomous cars, each model could be liable for thousands of deaths. And the responsible parties are sitting, alive and well, in some fancy corporate office. On top of a large pile of stock. Autonomous cars are an injury lawyer's wet dream.
Rest assured liability will be buried under the "I Agree" button on the dashboard of the autonomous car. And consumers will happily agree to it to not be burdened with driving.
Re: (Score:2)
to not be burdened with driving.
A lot of people would rather drive themselves. Particularly when word gets out that clicking the "I Agree" button will make them liable for the AI's screw-ups.
Re: (Score:2)
You can't waive someone else's right to sue the manufacturer.
Re: (Score:2)
If you ever look into this thing called "automobile liability insurance" that we have in the USA, you might realize it made your argument moot decades ago.
Re: (Score:2)
If a company builds autonomous cars, each model could be liable for thousands of deaths
On the other hand, every car involved in an incident will have detailed sensor logs of the event, and the company has a chance to fix the problem and update the software in all the cars of the same model to avoid the same type of incident in the future.
Humans will keep making the same mistakes over and over again....
Re: (Score:2)
and the company has a chance to fix the problem
I don't know how you are going to 'fix' homeless people wandering out into the middle of the road with a software patch.
Re: (Score:2)
A human driver may be more likely to miss the idiot pedestrian under some circumstances. I've picked out people on the sidewalk that I somehow knew were going to dash across the street in front of me against the light.
Re: (Score:2)
Re: (Score:2)
Yes, leave AI to die on the LISP machines at MIT, where it belongs!
Re: (Score:2)
it's not "AI" and that's my pet peeve
Mine too. These young whippersnappers with their fancy neural networks can kiss my symbolic program's ass.
Re: (Score:2)
Re: (Score:2)
It's just that if you grew up in the '80s, it was beaten into you that a CPU is not intelligent, whereas today people claim it is.
A CPU isn't intelligent, it only thinks it is.
Re: (Score:2)
> A CPU isn't intelligent, it only thinks it is.
“Don't anthropomorphize computers—they hate that!”
Well I wouldn't want to hurt its feelings.
Re:More AI hype (Score:5, Interesting)
You should at least try to understand what you are talking about. Deep learning, aka neural networks, are not "algorithms and programs". They are part of the machine learning branch of AI. The computer is not programmed but learns by itself. People in computer science have been trying to do that for as long as computers have been around but were never quite successful until about 2012. Deep learning excels at tasks which are too complicated for humans to write code such as detecting objects in picture and analyzing recordings of voices or translating text. This is revolutionary. Even the primitive neural net technology we currently have will transform many applications in the next few years, in that they perform much better than what humans used to code and they require just a handful of AI specialists to train instead of team of 100 programmers. If the technology continues to improve, it could take over just about every field: driving, medicine, law, manufacturing, etc. But the current technology has limitations and it's not clear how much it can progress further.
Computer programming could be one of the first jobs to be made obsolete by deep learning. Programmers will have to retrain themselves as teachers of neural nets instead.
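To make the "learns instead of being programmed" point concrete, here is a minimal sketch (my own illustration, not from any product mentioned above): a single perceptron that is never told the AND rule, but recovers it from labeled examples.

```python
# Minimal sketch (illustrative only): a perceptron that is never told
# the AND rule, but learns it from labeled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table

w = [0.0, 0.0]  # weights, start at zero
b = 0.0         # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward each mistake's
# correction. For linearly separable data this provably converges.
for _ in range(20):
    for x, target in data:
        err = target - predict(x)
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err

predictions = [predict(x) for x, _ in data]
print(predictions)  # [0, 0, 0, 1] -- the AND function, recovered from data
```

Deep learning stacks many such units with nonlinearities and trains them by gradient descent, but the core idea is the same: the behavior comes from data, not from hand-written rules.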
Re: (Score:2, Insightful)
Programmers will have to retrain themselves as teachers of neural nets instead.
The sky is not falling here. Writing code is the easy part. The hard part is conducting meetings with business analysts and executives, documenting specifications, and iteratively building against their expectations. The magical part is knowing the difference between what is asked for and what the customer really needs, all the while layering against the business domain. An AI won't be able to do this any sooner than it could replace an intelligent person with good social skills and
Re: (Score:3)
Establish that. There's a century of Disney movies and Saturday morning cartoons that still haven't reached consensus on what "being human" means.
You might as well say "computers aren't better at Love".
This is the same reason we have pundits discussing how a driverless car "chooses" which human dies. Computers don't know "die", or "risk", or any mental constructs. You get it all the time with the PB&J project: people think "then spread jelly" is an instruction, but a computer has no idea what
Re: (Score:2)
You get it all the time with the PB&J project, people think "then spread jelly" is an instruction, but a computer has no idea what the fuck "spread" means, or what a "knife" is. Computers don't suffer from the trolley problem, they have an "abnormal road condition" or "obstacle condition" (ie human ahead) and a routine to attempt.
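A toy sketch of that framing (all names here are hypothetical, purely for illustration): the controller never reasons about "dying"; it just maps a classified road condition to a routine to attempt.

```python
# Toy sketch (hypothetical names): an autonomous controller doesn't
# ponder the trolley problem; it maps detected conditions to routines.
def classify(sensor_reading):
    # Stand-in for perception: label the current road state.
    if sensor_reading == "obstacle_ahead":
        return "OBSTACLE"
    if sensor_reading == "lane_clear":
        return "NORMAL"
    return "ABNORMAL"

ROUTINES = {
    "NORMAL": "maintain_speed",
    "OBSTACLE": "brake_and_steer_clear",
    "ABNORMAL": "slow_and_request_takeover",
}

def act(sensor_reading):
    # No mental constructs here, just a lookup from condition to routine.
    return ROUTINES[classify(sensor_reading)]

print(act("obstacle_ahead"))  # brake_and_steer_clear
```

Whether the obstacle is a shopping cart or a pedestrian, the machine sees only a condition code and a routine; any "choice" is something observers read into it afterward.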
But they know what a "normal road condition" is, so they can tell it's an abnormal road condition? How is "make a new dialog" a computer instruction, even though I can instruct it to do that? Does a non-English speaker understand that "then spread jelly" is an instruction? Do you actually have a formal definition of what "spread" means yourself? How do you think Watson beat people at Jeopardy, or how assistants like Siri work? Nobody cares if the computer "understands" what jelly is, as long as it recognizes the facets th
Re: (Score:1)
Except that I have yet to see a deep learning machine that is anything more than pattern matching. A few add in optimization, but that's it. None of them modify the optimization criteria, or decide what to do based on the pattern matching. None of them create new problems to solve, or new ideas. The hardest part of programming is not actually creating the program, or writing code. It's deciding what the problem is, and what's worth the cost of implementing. Once that is done, the rest is easy. Replacing programme
Re: (Score:2)
This is revolutionary.
Back in the '80s I worked on a system to analyze customer requirements to verify that there were no conflicting requirements, either internally or legally. This was because the requirements took up a small room full of documents. As part of the analysis process, this program also spat out compilable, executable code.
Not that revolutionary.
Re: (Score:2)
This is revolutionary.
Back in the '80s I worked on a system to analyze customer requirements to verify that there were no conflicting requirements, either internally or legally. This was because the requirements took up a small room full of documents. As part of the analysis process, this program also spat out compilable, executable code.
Not that revolutionary.
Okay, so use that approach to build a self-driving car.
Re: (Score:2)
Well, people are confusing machine learning with AI. Machine learning is a subset of AI, not the whole of it. Sadly, it is all about advertising that turns the word into something bigger than its meaning nowadays. Thus, the meaning of the word AI is now reduced to just making faster decisions on tasks that humans can do. Would the decisions be better? Yes, but not always, because the computer depends on what it is fed (input). Often it is garbage in, garbage out. The application is still limited. And
Re: (Score:2)
twist the meaning of the word AI...
Maybe it's your definition that is twisted. Here's a dictionary definition of intelligence: "the ability to acquire and apply knowledge and skills."
Applying knowledge and skills is within the grasp of a neural net, so it makes sense to call it intelligent. That doesn't mean it can do everything that a human being can do.
Re: (Score:2)
Applying knowledge and skills is within the grasp of a neural net
It's also within the grasp of a couple of balls [wikipedia.org].
Re: (Score:2)
The balls don't acquire or apply knowledge and skills, but apart from that it's exactly the same, yes.
Re: (Score:2)
Hype is Hype until it becomes reality (Score:2)
I would like to pose a different perspective. Yes, you are correct that this is an algorithmic approach being called A.I. I think, though, that this is a matter of speaking something that doesn't exist into existence: by placing focus, and as a result resources, and as a result problem solving, and most importantly determination. Now there are people like you who glory in pointing out the discrepancy; kudos.
Re: (Score:2)
and neither is autonomous driving
Very true. Since AI is about teaching machines to exhibit behavior that we'd call "intelligent" when exhibited by humans, and the presence of any intelligence is demonstrably not a requirement for many drivers, autonomous driving doesn't have to exhibit intelligence either.
Re: (Score:2)
The summary you linked says $115,000 is considered low income in an area with high cost of living. Is three times that still low income?