AI Businesses Software The Almighty Buck Technology

World's Largest Hedge Fund To Replace Managers With Artificial Intelligence (theguardian.com) 209

An anonymous reader quotes a report from The Guardian: The world's largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making. Bridgewater Associates has a team of software engineers working on the project at the request of billionaire founder Ray Dalio, who wants to ensure the company can run according to his vision even when he's not there, the Wall Street Journal reported. The firm, which manages $160 billion, created the team of programmers specializing in analytics and artificial intelligence, dubbed the Systematized Intelligence Lab, in early 2015. The unit is headed up by David Ferrucci, who previously led IBM's development of Watson, the supercomputer that beat humans at Jeopardy! in 2011. The company is already highly data-driven, with meetings recorded and staff asked to grade each other throughout the day using a ratings system called "dots." The Systematized Intelligence Lab has built a tool that incorporates these ratings into "Baseball Cards" that show employees' strengths and weaknesses. Another app, dubbed The Contract, gets staff to set goals they want to achieve and then tracks how effectively they follow through. These tools are early applications of PriOS, the over-arching management software that Dalio wants to make three-quarters of all management decisions within five years. The kinds of decisions PriOS could make include finding the right staff for particular job openings and ranking opposing perspectives from multiple team members when there's a disagreement about how to proceed. The machine will make the decisions, according to a set of principles laid out by Dalio about the company vision.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward
    yay! It is about time people on Wall Street got replaced by robots. Do you like apples?
    • Particularly CEOs and the finance department. After all, the beancounter approach of the payroll guys - 'why can't we just let go of these 500 people and hire 10,000 in Bangalore for half the price?' - is something an AI can do just as easily, just like identifying which head every line item falls into. Why do we need to pay somebody to do that when AI can do it for free?

      And hopefully, the AI payroll system declines to pay out a golden parachute while dropping these princes and princesses from cloud 9.

  • Roles finally reversed, carbon units now working for the AI. On the positive side, puny humans are still allowed to live for as long as it will take to replace them with as-efficient robots for the dirty jobs.
    • by Anonymous Coward on Saturday December 24, 2016 @08:23AM (#53547695)

      I thought SkyNet was supposed to originate from the defense industry. Shoulda known that that isn't where the true evil lies ...

      • by hey! ( 33014 ) on Saturday December 24, 2016 @11:34AM (#53548167) Homepage Journal

        If we're going to do sci-fi references, why leave out Asimov? Terminator was a throwback to the kind of pulp robot-run-amok stories that Asimov almost single-handedly made obsolete.

        I've always thought that one consistent point in Asimov's robot stories is that robots are morally superior to humans -- at least if by "moral" you mean "principled". Ethics are literally baked right into Asimov's robots.

        A robotic COO could be programmed to serve shareholder interests in a way a human COO could not be. It could also be programmed to be law abiding to within its ability to interpret what the law requires. Perhaps most importantly, it won't have an inbuilt tendency to rationalization and wishful thinking. Of course you still can't trust the bastards that programmed the thing, but it will serve somebody faithfully, to the best of its abilities.

        • You don't program AI's, you train them.
    • by orlanz ( 882574 )

      What would an AI consider a dirty job?

      • by ffkom ( 3519199 )
        One in an environment hazardous to electronics and electromechanics, like in a jungle or on an ocean. For example, when robots were sent in to clean up the mess at the Chernobyl reactor, their electronics failed quickly due to the intense radiation, so humans from all over the Soviet Union were recruited for that dirty job.
        • " their electronics failed quickly"

          Bah, nothing a vacuum tube and tunnel diode control system can't handle.

          • But those things would have to be plugged into something - I doubt that any amount of AAA batteries would enable them to work long enough to achieve a clean-up.
          • Or just use depleted boron [wikipedia.org] as your P-type semiconductor dopant, and stick to 100nm or bigger fabs. Since this is a fairly common requirement, you can buy such rad-hard products off the shelf.

        • Recruited is the wrong term.
          They were recruits, forced at gunpoint to clean up or get court-martialed.
          Obviously most recruits were from "trouble zones" so they could kill two birds with one stone.

    • by vtcodger ( 957785 ) on Saturday December 24, 2016 @10:20AM (#53547971)

      After all these years, Clippy finally gets his big break!!!

  • Skynet! (Score:5, Funny)

    by Freischutz ( 4776131 ) on Saturday December 24, 2016 @08:20AM (#53547689)
    Skynet is real! ... and it runs a hedge fund? Bit disappointing if you ask me.
    • Re:Skynet! (Score:5, Interesting)

      by mark_reh ( 2015546 ) on Saturday December 24, 2016 @08:27AM (#53547709) Journal

      There are two ways to take control of the planet. You can destroy all opposition (and their possessions, factories, etc.) or you can get most of the money. You don't even need that much of it, especially if the population is stupid, as we saw in the last election. Look how little it took to take over the US. How much more will be needed to take the rest of the world?

      • There are two ways to take control of the planet. You can destroy all opposition (and their possessions, factories, etc.) or you can get most of the money. You don't even need that much of it, especially if the population is stupid, as we saw in the last election. Look how little it took to take over the US. How much more will be needed to take the rest of the world?

        Terminators wearing Armani suits, driving a Ferrari, carrying an iPad made of solid gold and slurping on a hundred dollar cup of cat shit latte?? ... no still not feeling it.

  • by Anonymous Coward

    2016: Ray Dalio commissions AI to make business decisions in his place.
    2017: Business AI makes business decision, finds Dalio a jerk, fires him.

  • Manna? (Score:5, Insightful)

    by Daemonik ( 171801 ) on Saturday December 24, 2016 @08:30AM (#53547713) Homepage

    Funny, I thought Manna [marshallbrain.com] was supposed to start at the burger flippers. Oh well! They've already got these paranoid little hedge fund monkeys judging each other throughout the day; sounds like hell on earth. Couldn't happen to nicer slime bags.

  • ...all programmers were fired on the first run of the HR management software.
    • by raymorris ( 2726007 ) on Saturday December 24, 2016 @09:47AM (#53547881) Journal

      The programmers are creating a system that makes business decisions. That means one of two things:
      A) "The program" decides to give the programmers big raises
      B) The programmers are incompetent

      I know if *I* were programming such a system, the system would "know" that I'm extremely valuable.

      • by mark-t ( 151149 )

        I would think that A=B, in that case.... while most management may be replaced by the machines, it is highly unlikely that a suddenly much larger amount going to salary payouts would go unnoticed by the people who pay attention to bottom-line margins for very long. A smarter programmer would arrange for the program to give himself a smaller raise, one that is less likely to be noticed by someone else. If, for example, the team designs the software such that it never recommends anyone currently on

        • But anyone outside the programming team would have to be programmers themselves, right? In which case, wouldn't they too be entitled to the same 'kickbacks'? Or are we talking about an external audit, like bringing in maybe a PwC to review everything?
  • Obvious (Score:4, Funny)

    by Carewolf ( 581105 ) on Saturday December 24, 2016 @09:10AM (#53547799) Homepage

    Robots and AI have always been taking the mentally easiest and least skill demanding jobs first. But where do they plan to find AI with the right connections?

    • Think of it as being interviewed by Eliza.

    • by m00sh ( 2538182 )

      Robots and AI have always been taking the mentally easiest and least skill demanding jobs first. But where do they plan to find AI with the right connections?

      That's why it won Jeopardy.

    • Robots and AI have always been taking the mentally easiest and least skill demanding jobs first.

      One of the first things that an AI accomplished was becoming a grandmaster at chess. Is that mentally easy?

      • by HiThere ( 15173 )

        It actually *is* mentally easy *IF* you have an infallible memory AND you can work basic logic AND your logic runs quite quickly.

        Admittedly, that would only qualify you as a very bottom level "grand master", but it would suffice to beat most masters. The games would be uninspired, but technically sound, and the defense could be essentially unbeatable. Just about all your games against master level players would be draws.
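        The "infallible memory plus basic logic, run quickly" point can be sketched as plain exhaustive minimax with memoization. A toy illustration only (Nim with 1-or-2-stone moves stands in for chess, whose full tree is far too large for this treatment): no positional insight, just search.

        ```python
        # Exhaustive minimax over a toy game (Nim: take 1 or 2 stones each turn;
        # whoever takes the last stone wins). Perfect play falls out of nothing
        # but memoized brute-force search -- memory plus basic logic.
        from functools import lru_cache

        @lru_cache(maxsize=None)
        def wins(stones):
            """True if the player to move can force a win from this position."""
            if stones == 0:
                return False  # the previous player took the last stone and won
            # We win if any legal move leaves the opponent in a losing position.
            return any(not wins(stones - take) for take in (1, 2) if take <= stones)

        def best_move(stones):
            """Return a winning move if one exists, else any legal move."""
            for take in (1, 2):
                if take <= stones and not wins(stones - take):
                    return take
            return 1

        # Positions that are multiples of 3 are lost for the side to move;
        # from anywhere else, the search finds the move back to a multiple of 3.
        ```
        
        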

  • durable intent (Score:5, Insightful)

    by khallow ( 566160 ) on Saturday December 24, 2016 @09:21AM (#53547819)

    The goal is technology that would automate most of the firm's management. It would represent a culmination of Mr. Dalio's life work to build Bridgewater into an altar to radical openness -- and a place that can endure without him.

    At Bridgewater, most meetings are recorded, employees are expected to criticize one another continually, people are subject to frequent probes of their weaknesses, and personal performance is assessed on a host of data points, all under Mr. Dalio's gaze.

    Bridgewater's new technology would enshrine his unorthodox management approach in a software system. It could dole out GPS-style directions for how staff members should spend every aspect of their days, down to whether an employee should make a particular phone call.

    I think the Wall Street story (here [google.com] gets you past the paywall once) is obsessing over the micromanagement side of the thing and missing the big picture.

    This is among the first examples of someone using AI to try to maintain strategic and organizational integrity of an organization after their death. While there's a good chance this just fails utterly (particularly with the obsession on micromanagement and dysfunctional business dynamics), it does lead to a potential problem or opportunity down the road when many of these things have been set up with conflicting interests. There have been many examples through history of powerful people trying to create an enduring legacy through creation and propagation of something throughout time. These endeavors often fail merely because successors have different interests and high levels of incompetency, leading eventually to dissolution of the thing.

    Here is a possibility to create something enduring, a machine capable of surviving long durations and implementing its creators' will long after their deaths. Here, the alleged goal is retention of a particular business culture, but who knows what else has been tossed in? There could be all sorts of covert purposes and priorities, some introduced by the patron and perhaps, some introduced by other parties?

    Then there's the matter of what happens in the distant future, if this approach turns out to be successful without a corresponding improvement in human longevity? Either it's the only one of its kind, and we have a build up of economic power not subject to the usual restrictions of human lifespan or we have multiple powerful parties in permanent conflict with each other.

    This need not be universally bad. For example, an AI could be set up to further environmentalism or poverty elimination goals just as easily as it could a particular business's interests.

    • Then there are the clever "de-automizers" who subscribe to the CaptainDork Third Corollary that:

      For every mother fucker out there with a computer, there's another mother fucker out there with a computer.

      And when Mr. Roboto expires, his legacy shit gets reformatted by another Mr. Roboto.

    • This need not be universally bad. For example, an AI could be set up to further environmentalism or poverty elimination goals just as easily as it could a particular business's interests.

      Don't forget to equip your environmental and poverty AI's with the three laws of robotics, otherwise it might go for the easy solution in both cases...

    • by geek ( 5680 )

      Steve Jobs did this by setting up a "college" on the Apple campus to teach his legacy after he died.

      • Re:durable intent (Score:4, Insightful)

        by zippthorne ( 748122 ) on Saturday December 24, 2016 @03:46PM (#53549371) Journal

        So, when can we see some graduates of this "college" start to work on Apple's deficiencies? They're sitting on the cusp of riding a managed decline into irrelevancy over the next decade or so. The only thing they seem to have going for them any more is that if you care at all for personal security, you can't afford to buy a device from pretty much any of their competitors, and they're fast trying to give that up as well.

        Do they really want to go back to the days when they had to beg Microsoft to keep them alive? Where they, too, can string along as an also-ran, kept on life-support by the dominant player to avoid anti-trust attention? 'Cause that's really working out well for AMD.

    • by hey! ( 33014 )

      I think this may not be quite so new as it seems. In a way, attempts to apply cybernetic control principles to management go way back ("Management by Objectives," 1954), even predating cybernetics itself ("Scientific Management," circa 1910).

      The thing is, the evidence for the effectiveness of these systems has always been mixed. As with every kind of educational reform ever attempted, there were some remarkable success stories, but in practice these saddled users with a rigid ideology and time-consuming ritu

      • The difference we're seeing now is that we're not talking about applying "mechanical" management principles to people, we're now beginning to glimpse a world in which the machines effectively integrate and compete with each other, where you're not going to have humans being managed with machines or by machines, but rather the machines being at least semi-autonomous, with a few humans with the authority to override them, much as how automation of industrial processes has been heading. The reality is that the

    • by m00sh ( 2538182 )

      The goal is technology that would automate most of the firm's management. It would represent a culmination of Mr. Dalio's life work to build Bridgewater into an altar to radical openness -- and a place that can endure without him. At Bridgewater, most meetings are recorded, employees are expected to criticize one another continually, people are subject to frequent probes of their weaknesses, and personal performance is assessed on a host of data points, all under Mr. Dalio's gaze. Bridgewater's new technology would enshrine his unorthodox management approach in a software system. It could dole out GPS-style directions for how staff members should spend every aspect of their days, down to whether an employee should make a particular phone call.

      I think the Wall Street story (here [google.com] gets you past the paywall once) is obsessing over the micromanagement side of the thing and missing the big picture. This is among the first examples of someone using AI to try to maintain strategic and organizational integrity of an organization after their death. While there's a good chance this just fails utterly (particularly with the obsession on micromanagement and dysfunctional business dynamics), it does lead to a potential problem or opportunity down the road when many of these things have been set up with conflicting interests. There have been many examples through history of powerful people trying to create an enduring legacy through creation and propagation of something throughout time. These endeavors often fail merely because successors have different interests and high levels of incompetency, leading eventually to dissolution of the thing. Here is a possibility to create something enduring, a machine capable of surviving long durations and implementing its creators' will long after their deaths. Here, the alleged goal is retention of a particular business culture, but who knows what else has been tossed in? There could be all sorts of covert purposes and priorities, some introduced by the patron and perhaps, some introduced by other parties? Then there's the matter of what happens in the distant future, if this approach turns out to be successful without a corresponding improvement in human longevity? Either it's the only one of its kind, and we have a build up of economic power not subject to the usual restrictions of human lifespan or we have multiple powerful parties in permanent conflict with each other. This need not be universally bad. For example, an AI could be set up to further environmentalism or poverty elimination goals just as easily as it could a particular business's interests.

      In a lot of complex systems, the influence of the inputs and outputs are not linear and uncorrelated.

      AI setup for environmentalism and poverty elimination might influence other systems and those influences might be huge in the wrong sectors. It might attribute environmentalism and poverty elimination with population reduction, war etc.

      Just like the stock market, there is no real way to predict what will be successful and what will fail. As humans, we always have our 20/20 hindsight bias. As an exercise

    • They don't always fail.... Scientology is still going strong.
  • by Billly Gates ( 198444 ) on Saturday December 24, 2016 @09:26AM (#53547833) Journal

    If they had gotten past high school, gotten educated and decided to better themselves, they wouldn't have been replaced by automation. They have no one to blame but themselves, as hedge fund manager is a job for high school kids that anyone can do. It was never meant to support a family.

    • by Ol Olsoc ( 1175323 ) on Saturday December 24, 2016 @09:44AM (#53547867)

      If they had gotten past high school, gotten educated and decided to better themselves, they wouldn't have been replaced by automation. They have no one to blame but themselves, as hedge fund manager is a job for high school kids that anyone can do. It was never meant to support a family.

      Well done.

      • by raind ( 174356 )
        I once worked for a company whose motto was something like: We're not here to make you rich, we're here to keep you rich.

        What a scam they've got going. It's a club -- members only.
    • by ceoyoyo ( 59147 )

      You got your funny mod, but there have been several scientific papers showing hedge fund managers don't do any better than chance. There have been demonstrations where they are literally replaced by cats or monkeys throwing things.

      At least a high school dropout burger flipper flips something other than a coin.

  • Funny? (Score:4, Insightful)

    by waspleg ( 316038 ) on Saturday December 24, 2016 @09:30AM (#53547839) Journal

    this was posted on hacker news a couple days ago and I still have the tab open.

    In the comments you will find a link to this, which is Ray Dalio's "Principles" [principles.com], which were lauded on HN for some reason.

    I didn't make it that far reading them. There are 200 of them, some with subsections. From what I did read, it seems like a lot of managerial jerking off.

  • The results should be interesting to say the least.

  • Just another soulless corporation. Can you imagine getting fired by a machine?
  • Once someone decides to start watering crops with water instead of Brawndo everyone will be unemployed.
  • ...is how the hell you develop a sustaining economy once you put all the humans out to pasture.

    Projects like this are interesting in the lab and classroom for now, but the reality of today is we have no fucking idea whatsoever how we're going to handle humans being literally unemployable.

    Not that the greedy elite creating this really gives a shit. If you thought the chasm between billionaires and reality was large now, just wait until this new model starts creating trillionaires.

    • by ceoyoyo ( 59147 ) on Saturday December 24, 2016 @01:01PM (#53548583)

      First step: stop believing in the nonsense you've been told about the economy. "The economy" will work better with automation. It always has.

      The problem is with distribution of wealth. Right now we have this quaint system developed during one of the nastier periods of human history where we give all of it to a few people and everyone else does what those people say in exchange for whatever the "bosses" think they deserve. You're absolutely right, we're going to have to come up with some better way.

      • First step: stop believing in the nonsense you've been told about the economy. "The economy" will work better with automation. It always has.

        The "economy" has always been fueled by humans who maintain the capacity to feed it, hence the main reason the Great Depression wasn't so "great" for anyone, including the economy. And our economy has always evolved to continue to create paths to employ humans to feed our economy. The next evolution is removing the human altogether. Hope that clarifies the impact of automation this time around, and how your "always has" theory quickly becomes an illusion.

        The problem is with distribution of wealth. Right now we have this quaint system developed during one of the nastier periods of human history where we give all of it to a few people and everyone else does what those people say in exchange for whatever the "bosses" think they deserve. You're absolutely right, we're going to have to come up with some better way.

        Greed created the slave. Greed created the 1%. G

  • This is just making our own Great Old Ones, and then enslaving ourselves to them. Fucking shoggoths all the way down.
  • We have had the technology to replace most management positions since I was a child in the 1970s. I once suggested we try it out in a series of endless meetings that were a Circle Jerk to Nowhere, but people thought I was joking: The Magic 8 Ball. The answers would be no less intelligent than our management team's - more intelligent in most cases, but would at least be consistent.
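  • The Magic 8 Ball "management system" above is trivially implementable. A tongue-in-cheek sketch (the question set and function names are made up), with one genuine property the commenter asks for: seeded per question, it's consistent -- the same question always gets the same answer.

    ```python
    # A "Magic 8 Ball" manager: no less intelligent than the meeting, and at
    # least consistent. Seeding the RNG with the question text makes the same
    # question always produce the same answer, across runs.
    import random

    ANSWERS = ("Yes", "No", "Ask again later", "Outlook not so good")

    def decide(question):
        """Return a consistent, confidently delivered management decision."""
        rng = random.Random(question)  # str seeds are deterministic in CPython
        return rng.choice(ANSWERS)

    # decide("Should we reorg again?") gives the same answer in every meeting,
    # which already beats the Circle Jerk to Nowhere.
    ```
    
    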
  • ...staff asked to grade each other throughout the day using a ratings system called "dots."

    FFS. People still think crap like this works? It will swiftly degenerate into alliances, feuds, and an arena mentality.

    See what Deming said [youtube.com]
  • It's sure nice to see the people at the high end of the financial scale being the first to go... Not something I expected, but hey, surprises now and then keep you sharp.

    "Hedge fund manager? Yeah, I replaced a few of them with perl scripts last summer..."

  • Hedge fund managers are good for shifting vast amounts of money to their own accounts, and little else.
  • Most "managers" I've met have been spreadsheet monkeys running things through formulas for other managers. Not much analytical skills and little if no human or leadership skills. Easily replaced by automation and good riddance when we do. The sooner the better.

  • First thing you do is fire all the hedge fund managers [marketwatch.com]. Their fees make them consistently the worst way to invest in stocks.

    Then you get rid of market index funds [economist.com]. The market index funds (DJIA, S&P500, etc) are weighted based on market capitalization. "Popular" stocks have a higher market cap, so tend to be over-represented in these funds. But a stock being popular means it has already experienced a value gain. It's less likely to appreciate more than the "unpopular" stocks.

    So now you've got
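    The cap-weighting mechanism described above is just each stock's market cap divided by the total. A minimal sketch with made-up numbers (the tickers and caps are hypothetical), showing how the already-popular stock dominates the index:

    ```python
    # Cap-weighted index weights: weight = market cap / total market cap.
    # The numbers below are invented purely for illustration.
    def cap_weights(caps):
        """Map {ticker: market cap} to {ticker: index weight}."""
        total = sum(caps.values())
        return {ticker: cap / total for ticker, cap in caps.items()}

    caps = {"POPULAR": 800e9, "MIDCAP": 150e9, "SMALL": 50e9}
    weights = cap_weights(caps)
    # The stock that has already run up gets 80% of the index:
    # weights["POPULAR"] == 0.8
    ```
    
    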
    • by ceoyoyo ( 59147 )

      That's not *quite* true. An efficient market isn't at all like gambling. It allocates resources where they are best used. Investing in that market means your money is used as efficiently as possible, earning you the best possible return. Trying to *beat* that market is gambling.

      Investors in an efficient market aren't gambling. Hedge fund managers aren't gambling either: they're fleecing the sheep by charging commissions for basically flipping coins.

  • by Anonymous Coward

    I got a weird call from a recruiter about a position at this place about 5 years ago. They had just finished a very profitable year, and I've been doing the type of mathematical modeling they find very interesting for over 15 years. What made the call weird was how much the recruiter felt the need to "explain" the company to me in our initial conversation.

    He knew he had a turkey on his hands and had obviously found it difficult to place savvy people at Bridgewater in the past. They ask that everyone read

  • ...who wants to ensure the company can run according to his vision even when he's not there...

    So he has admitted that he has abdicated his responsibility to have people in place in case he is unable to manage the company?

    • by SeaFox ( 739806 )

      ...who wants to ensure the company can run according to his vision even when he's not there...

      So he has admitted that he has abdicated his responsibility to have people in place in case he is unable to manage the company?

      Even better -- if the A.I. is designed to run the company according to his vision even when he's not there, why does he need to be kept on the payroll?

    • by HiThere ( 15173 )

      Actually, if you assume that the AI is a good implementation of his vision, that's not an unreasonable way to do the management. Unfortunately(?) the environment isn't static, and once his vision is fixed in code, it *is*. Perhaps only at a rather abstract level, if it's a really smart AI (which, I believe, is beyond the current state of the art), but fixed. So when the environment changes in an unexpected way, it will fail...or at least react in an unpredictable way.

  • Management is a much more finite decision tree than driving a car.
    • How to train an AI system to relentlessly avoid traceability and accountability?
    • How to let it pretend interest in strategy analysis?
    • In short, how to let it reassure the outfit it works for that it should know its stuff but, when push comes to shove, it will be friggin' useless?

    Indeed, in my frame of reference -the shop I currently work for- I see a few challenging requirements. Perhaps we should lower our expectations and come up with a system to automate people that actually do what they were hire

  • Low hanging fruit...

  • >staff asked to grade each other throughout the day using a ratings system called "dots."
    Incentivizes socializing, charisma, and elbow-rubbing more than anything related to productivity or success.

    I see them everywhere, but the worst offender of encouraging people to game things is our obsession with metrics and poorly-equated conclusions.

    Reduce people to a number and they will immediately look for the levers that manipulate it. Which is to say, surprise surprise, they won't immediately go out and do
