
US Scrambles to Keep Fuel Flowing After Pipeline Cyberattack. Russian Cybercriminals Suspected (bbc.com)

A ransomware attack affecting a pipeline that supplies 45% of the fuel for the Eastern U.S. has now led U.S. President Biden to declare a regional emergency providing "regulatory relief" to expand fuel delivery by other routes.

Axios reports: Friday night's cyberattack is "the most significant, successful attack on energy infrastructure" known to have occurred in the U.S., notes energy researcher Amy Myers Jaffe, per Politico. It follows other significant cyberattacks on the federal government and U.S. companies in recent months... 5,500 miles of pipeline have been shut down in response to the attack.
The BBC reports: Experts say fuel prices are likely to rise 2-3% on Monday, but the impact will be far worse if it goes on for much longer... Colonial Pipeline said it is working with law enforcement, cyber-security experts and the Department of Energy to restore service. On Sunday evening it said that although its four mainlines remain offline, some smaller lateral lines between terminals and delivery points are now operational...

Independent oil market analyst Gaurav Sharma told the BBC there is a lot of fuel now stranded at refineries in Texas. "Unless they sort it out by Tuesday, they're in big trouble," said Sharma. "The first areas to be impacted would be Atlanta and Tennessee, then the domino effect goes up to New York..." The temporary waiver issued by the Department of Transportation enables oil products to be shipped in tankers up to New York, but this would not be anywhere near enough to match the pipeline's capacity, Mr Sharma warned.

UPDATE (5/10): "On Monday, U.S. officials sought to soothe concerns about price spikes or damage to the economy by stressing that the fuel supply had so far not been disrupted," reports the Associated Press, "and the company said it was working toward 'substantially restoring operational service' by the weekend."

CNN reports that a criminal group originating from Russia named DarkSide "is believed to be responsible for a ransomware cyberattack on the Colonial Pipeline, according to a former senior cyber official. DarkSide typically targets non-Russian speaking countries, the source said... Bloomberg and The Washington Post have also reported on DarkSide's purported involvement in the cyberattack..."

If so, NBC News adds some sobering thoughts: Although Russian hackers often freelance for the Kremlin, early indications suggest this was a criminal scheme — not an attack by a nation state, the sources said. But the fact that Colonial had to shut down the country's largest gasoline pipeline underscores just how vulnerable America's cyber infrastructure is to both criminals and national adversaries, such as Russia, China and Iran, experts say. "This could be the most impactful ransomware attack in history, a cyber disaster turning into a real-world catastrophe," said Andrew Rubin, CEO and co-founder of Illumio, a cybersecurity firm...

If the culprit turns out to be a Russian criminal group, it will underscore that Russia gives free rein to criminal hackers who target the West, said Dmitri Alperovitch, co-founder of the cyber firm CrowdStrike and now executive chairman of a think tank, the Silverado Policy Accelerator. "Whether they work for the state or not is increasingly irrelevant, given Russia's obvious policy of harboring and tolerating cyber crime," he said.

Citing multiple sources, the BBC reports that DarkSide "infiltrated Colonial's network on Thursday and took almost 100GB of data hostage. After seizing the data, the hackers locked the data on some computers and servers, demanding a ransom on Friday. If it is not paid, they are threatening to leak it onto the internet..."

The BBC also shares some thoughts from Digital Shadows, a London-based cyber-security firm that tracks global cyber-criminal groups to help enterprises limit their exposure online: Digital Shadows thinks the Colonial Pipeline cyber-attack has come about due to the coronavirus pandemic — the rise of engineers remotely accessing control systems for the pipeline from home. James Chappell, co-founder and chief innovation officer at Digital Shadows, believes DarkSide bought account login details relating to remote desktop software like TeamViewer and Microsoft Remote Desktop.

He says it is possible for anyone to look up the login portals for computers connected to the internet on search engines like Shodan, and then "have-a-go" hackers just keep trying usernames and passwords until they get some to work.

"We're seeing a lot of victims now, this is seriously a big problem now," said Mr Chappell.
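What Chappell describes is, in effect, credential stuffing against remote-access portals that are visible to the whole internet. As a rough illustration of how easily that exposure can be enumerated (including defensively, to audit your own footprint), here is a minimal sketch using the official shodan Python package; the API key and organization name are hypothetical placeholders, not anything from the Colonial incident.

    import shodan

    api = shodan.Shodan("YOUR_SHODAN_API_KEY")  # hypothetical key

    # 3389 = Microsoft Remote Desktop, 5938 = TeamViewer
    for query in ('port:3389 org:"Example Pipeline Co"',
                  'port:5938 org:"Example Pipeline Co"'):
        results = api.search(query)
        print(f'{query} -> {results["total"]} exposed hosts')
        for match in results["matches"]:
            # Each match describes one internet-facing endpoint
            print(f'  {match["ip_str"]}:{match["port"]}')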

  • by Gravis Zero ( 934156 ) on Monday May 10, 2021 @12:12AM (#61367706)

    People seem to get confused a lot when it comes to companies being slammed by ransomware. Corporations that get ransomware'd are not the victims here; they are in fact the perpetrators. What they did was neglect security to such an extent that they have no backup plan. Their systems could not be restored quickly from backups, and they didn't put money into ensuring the software they use was bulletproof. This kind of attack is an inevitability, and they did nothing to prepare for it.

    • Re: (Score:2, Insightful)

      by Mal-2 ( 675116 )

      We don't know that they did nothing. Clearly they didn't do enough, but that doesn't prove negligence. It does suggest it, but that's the old saw about incompetence rather than malice.

      • by bloodhawk ( 813939 ) on Monday May 10, 2021 @12:59AM (#61367804)
        Yes, it really does prove negligence. Being hit doesn't; not being able to recover rapidly after being hit DOES.
        • by gweihir ( 88907 )

          Being hit does indicate negligence and should be investigated. Being unable to recover does indicate negligence but does not prove it. It should be investigated as well, though.

        • by Entrope ( 68843 )

          They probably have a disaster recovery plan. But almost always, they would only be able to restore to their vulnerable state. Until they figure out how the attackers got in, and how to plug that hole, just restoring a backup won't fix the problem.

          • by bjwest ( 14070 )

            Until they figure out how the attackers got in, and how to plug that hole, just restoring a backup won't fix the problem.

            The attackers got in because this critical infrastructure system was connected to the internet where anyone anywhere in the world could attack it at their leisure. Remove that vulnerability and you eliminate 99% or more of the attack vectors.

      • by 1s44c ( 552956 ) on Monday May 10, 2021 @07:14AM (#61368424)

        Incompetence is a choice. It's the choice to not invest in competence. It's a rational PHB decision to not spend money on competence and take it as bonuses instead. If the worst happens they worm their way out of responsibility but they keep their bonuses.

      • by ceoyoyo ( 59147 )

        Not doing enough is pretty much the definition of negligence.

    • Re: (Score:2, Informative)

      by Powercntrl ( 458442 )

      Or how about this: Don't connect critical systems to the internet.

      Why is that so hard to understand?

      • Why is that so hard to understand?

        It seems you've never looked at a pipeline before. If you had, you'd understand it; otherwise you'd just massively increase the cost of your pipeline to the point of being unviable. Mind you, a pipeline that doesn't get built is likely not going to suffer from an attack either, so maybe that's just some 4D chess you're playing there ;-)

    • by CaptQuark ( 2706165 ) on Monday May 10, 2021 @12:54AM (#61367796)

      What they did was neglect security to such an extent that they have no backup plan. Their systems could not be restored quickly from backups, and they didn't put money into ensuring the software they use was bulletproof. This kind of attack is an inevitability, and they did nothing to prepare for it.

      None of those claims are substantiated by the press release. They could have perfectly adequate backups, but why restore to a point where you know there are vulnerabilities? Also, restoring from backups takes you to a prior state, which might not be the best option. Changes in storage levels, the status of hundreds of valves, and forensic information could all be lost by just restoring from backups made three days ago.

      Also, almost any remote access software is going to have a major weakness: the human element. The bad guys may have been testing for weak user/password combinations for weeks before they found a valid logon account. Two-factor authentication can reduce this vulnerability, but even that has the potential for a vulnerability. As the many hack-a-thons have proven, even the best-maintained software can have zero-day exploits. To suggest they were negligent in their security is not supported by the information we currently have.
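      A weeks-long, low-and-slow password-guessing campaign like the one described here usually leaves a long trail of failed logins. Here is a minimal sketch of flagging it, assuming a simplified log format of "<ISO timestamp> FAILED <user> <ip>" rather than any real product's log schema:

      from collections import defaultdict
      from datetime import datetime

      def slow_bruteforce_sources(lines, min_failures=50, min_days=3):
          """Flag source IPs with many failures spread over several days."""
          failures = defaultdict(list)
          for line in lines:
              ts, status, user, ip = line.split()
              if status == "FAILED":
                  failures[ip].append(datetime.fromisoformat(ts))
          # A burst of failures sustained over days suggests patient guessing
          return {ip: len(ts_list)
                  for ip, ts_list in failures.items()
                  if len(ts_list) >= min_failures
                  and (max(ts_list) - min(ts_list)).days >= min_days}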


      • The bad guys may have been testing for weak user/password combinations for weeks before they found a valid logon account.

        I don't get this at all. On my company's systems my new passwords are checked for strength. If a password is weak, I get a pop-up saying: "Grow up and get a real password, kid!"
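        For what it's worth, the kind of server-side strength check being described is only a few lines. This is an illustrative sketch; the length rule, character classes, and wordlist are made up for the example, not taken from any particular product:

        import re

        COMMON = {"password", "123456", "qwerty", "letmein"}  # illustrative list

        def is_strong(password: str) -> bool:
            # Reject short or well-known passwords outright
            if len(password) < 12 or password.lower() in COMMON:
                return False
            # Require at least three of four character classes
            classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
            return sum(bool(re.search(c, password)) for c in classes) >= 3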

      • Rubbish. Corporate continuity is a CEO responsibility. DR plans are not just about disaster recovery; they include corporate continuity - that is, the restoration of BAU (business as usual). DR plans always include things such as undiscovered compromise or other post-restore adjustments. So was simply stopping the pipes the plan? Let's see the risk plan, and I hope it did not reduce to picking up the phone and calling for help.
      • by DarkOx ( 621550 )

        Also, restoring from backups takes you to a prior state, which might not be the best option. Changes in storage levels, the status of hundreds of valves, and forensic information could all be lost by just restoring from backups made three days ago.

        All of that kind of thing should be addressed in a competent DR plan - and maybe it is, which might be why it takes time. Servers have to get imaged or storage arrays have to be swapped out to preserve the forensic data. There are probably complex restart procedures for industrial process data gathering and control that likely require engineers onsite, etc. It all takes time.

        But our oh-so-smart Slashdotter believes that because they can click through the Ubuntu installer and extract their tarball of /ho

      • Also, we don't know what physical limits there are on restarting service after an emergency shutdown. Possibly there are a bunch of physical checks and tests they have to do before restarting.
      • To suggest they were negligent in their security is not supported by the information we currently have.

        Your facts were good but your conclusion is weak.

        No critical infrastructure control system should ever be connected to the internet in a way that enables write access. There are hardware read-only links that can be used for monitoring.

        The fact that an internet-based attack can shut down critical infrastructure frankly proves that their security is negligent.
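        The read-only-link idea is usually implemented with a hardware data diode, but the software pattern looks something like this sketch: telemetry is pushed one-way over UDP and nothing ever listens for a reply, so there is no channel back into the control network. The collector address and the sensor-read function are hypothetical.

        import json
        import socket
        import time

        COLLECTOR = ("10.0.0.5", 9999)  # hypothetical monitoring host

        def read_sensor():
            # Placeholder for a real SCADA/historian read
            return {"valve_17_psi": 812.4, "ts": time.time()}

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            # Fire-and-forget: this process never receives, only sends
            sock.sendto(json.dumps(read_sensor()).encode(), COLLECTOR)
            time.sleep(5)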

        • by DarkOx ( 621550 )

          No critical infrastructure control system should ever be connected to the internet in a way that enables write access

          I am sorry, but this is dumb. The reality is that remote access from wherever the engineers who can support these systems actually are (which means the internet) is pretty important. Something like a large forest fire might mean engineers can't even get to some kinds of industrial sites safely, or might have to evacuate.

          Being able to do things like remotely shut valves on pipelines that might be in the path of such a catastrophe is a good thing! There might be some exceptions, like managing that nuclear reactor, etc.

    • This is much less about negligent security than about the fact that nothing can be perfectly secure. Remote access can be pretty darn secure, but many of the systems that were put in place during the early days of the lockdown were bootstrapped.

      I am a bit baffled by why exactly their pipeline systems were impacted by a data exfiltration attack, though. Even a lazy remote access system should have sufficient logging to identify a breach of the SCADA systems.

    • and they didn't put money into ensuring the software they use was bulletproof.

      Do you have meteorite insurance? I mean, you could get struck by one at any point. Why didn't you invest in that? Why leave yourself open to a risk rather than make something "bulletproof"? /s Okay, sarcasm aside, the fact that you think you can completely secure against this kind of thing shows an incredible amount of ignorance. Defense in depth is the name of the game. Business continuity strategies are what should be invested in.

      The idea that just pouring a little more money into security makes this al

    • by gweihir ( 88907 )

      Indeed. Regular _independent_ security reviews and pen-tests are mandatory if you provide a critical service. Apparently they had none of that. The whole thing sounds like some amateurs or low-skill criminals got in easily. Also, how can somebody exfiltrate 100GB of data without anybody noticing?
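      Even crude egress accounting would make a 100GB transfer stand out. A sketch, assuming flow records are available as (src_ip, dst_ip, byte_count) tuples from NetFlow or firewall logs; the threshold and address prefix are illustrative:

      from collections import Counter

      THRESHOLD = 10 * 1024**3  # flag >10 GiB outbound per internal host

      def egress_alerts(flows, internal_prefix="10."):
          out_bytes = Counter()
          for src, dst, nbytes in flows:
              # Count only traffic leaving the internal network
              if src.startswith(internal_prefix) and not dst.startswith(internal_prefix):
                  out_bytes[src] += nbytes
          return {ip: total for ip, total in out_bytes.items() if total > THRESHOLD}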

    • Mod up. See SSID and BSSID, WiGLE and Google Maps. Probably most of these are still unprotected and advertising their vulnerabilities. Get them to produce their risk plan, and manual recovery times, and who signed off on it. By law, they are supposed to have up-to-date risk plans, and after SolarWinds and similar attacks, everyone should have been on red alert. BTW, it appears their backup/restore plan was also deficient, and there are some hints that intrusion logs may have been pregnant with wild login guesses. One
    • At first, I thought I disagreed with your comment. However, the more I thought about it, the more I realize that you are correct.

      As sysadmins of servers that are not even close to critical infrastructure, we receive non-stop 24/7 attacks over the network.

      Worse, the intimation here is that a country with national infrastructure that is silly enough to architect itself into cyber vulnerability, combined with lax security policy that makes it possible for itself to be ransomwared into shutdown (which by itself

      • Let me explain BSSIDs, WiGLE, and SSIDs. Suppose you work in a secure facility, or rather a facility that is a critical bit of infrastructure. SSIDs tell you a lot if you map them back to the plant. When the engineer goes home, their laptop's location is logged, and the BSSID or MAC address tells you a lot about that laptop. Two big companies allow you to search... You could really design some clever scenarios - even script kiddies could. Now google your own SSID and gasp. An attack is an inevitability. The quality of t
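        For the curious, the lookup being described really is a one-call affair against WiGLE's public search API; the SSID and credentials below are hypothetical placeholders, and the endpoint is WiGLE's documented v2 search route:

        import requests

        resp = requests.get(
            "https://api.wigle.net/api/v2/network/search",
            params={"ssid": "ExamplePlantWiFi"},          # hypothetical SSID
            auth=("wigle_api_name", "wigle_api_token"),   # hypothetical API creds
        )
        for net in resp.json().get("results", []):
            # Each result includes the network's last observed coordinates
            print(net["ssid"], net["trilat"], net["trilong"])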
    • by Ogive17 ( 691899 )
      That's quite an ignorant take on the situation.

      Humans are often the weak link. The company could have done everything right, only to be done in by a random employee who was phished.

      Backups are also not instantaneous. I learned quite a bit when my employer was hit last year. It took a few days to get the most critical systems back online, and the remaining network drives were down for a couple of weeks. Any infected PC had to be completely wiped. You can't just flip a switch and be operating again.
    • It's possible they had an elaborate plan but it didn't work right.

      I know I've seen almost all organizations struggle to get this right. It's colossally expensive, and there is constant financial pressure not to spend money on idle resources, the kind almost necessary for credible DR/recovery plans.

      Even the ones that do make a big effort to at least try struggle with actually making it work, a problem compounded by organizations' lack of willingness to endure downtime on dry runs or other ongoing DR practice to

      • by DarkOx ( 621550 )

        This is all spot on. The other truth about enterprise DR is you may have a plan, you may have run books, you might have strategic assets - it all seems solid right up until you get punched in the face.

        I remember the great East Coast power outage. We had generators for our data center and standby fuel delivery contracts from multiple vendors. Guess what: it turned out that when you lose power over that large a region, everyone wants diesel, and we could not get it. Our standby site - ~100 miles away - no power ther

    • Yeah... blame that victim! It's not like they buy control systems from third parties. They should just spend more to buy the magical (and somewhat inconveniently non-existent) hardware that has no vulnerabilities!

      Capitalism will save us all if we just spend more money!

    • by ceoyoyo ( 59147 )

      Governments should really be creating intelligence units to hack their own infrastructure and uncover this kind of negligence. Whatever the gang wants in ransom, it's probably a deal, but it would be better if the money didn't go to criminals.

    • I agree, in part. However let's take this a step further:

      The corporation that was attacked wasn't a victim; however, their customers (who are US citizens and companies) are victims. This means those victims deserve compensation not only from the attackers but, should the attackers be state-sponsored, from that state. Since the state that (likely) sponsored the attack will be unwilling to pay up, it's up to the US government to respond in kind, with an extra dollop of F-U to the nation-state playing games.

      I'm
  • I wouldn't be surprised if we eventually find out the remote access solutions were set up and configured by the petroleum engineers. After all, if they're knowledgeable about one technical subject, they must be knowledgeable about every other technical subject...

    Team Viewer? Seriously?

    • by evanh ( 627108 )

      Much more likely to be the MBAs directing all that. Often even chiding the engineers for not having used the conferencing software before.

    • by thegarbz ( 1787294 ) on Monday May 10, 2021 @02:51AM (#61368012)

      This isn't the fault of engineers. This is the fault of management not segregating responsibilities between people who know what they are doing.

      We dinged a site for this during a consultancy visit recently. Big red mark on the report: you have no IT people in the team managing your control system. The site had no idea what we were talking about, so we had to spell it out for them expressly: you have a bunch of process engineers who were trained how to use vendor software and who learnt what firewalls and security are from a one-week online course.

      Hire the right people. Segregate responsibilities. That responsibility lies with managers, not with people who don't know any better, forced to work in areas outside their expertise.

      • by gweihir ( 88907 )

        Old story: Hiring cheap people or no people is usually very expensive in the long run when it comes to IT.

        This isn't the fault of engineers. This is the fault of management not segregating responsibilities between people who know what they are doing.

        We dinged a site for this during a consultancy visit recently. Big red mark on the report: you have no IT people in the team managing your control system. The site had no idea what we were talking about, so we had to spell it out for them expressly: you have a bunch of process engineers who were trained how to use vendor software and who learnt what firewalls and security are from a one-week online course.

        Hire the right people. Segregate responsibilities. That responsibility lies with managers, not with people who don't know any better, forced to work in areas outside their expertise.

        While what you say is true, it raises the question of how management is supposed to know that process engineers who say they've got the security thing under control are wrong. Just how much specialization is required is something that only experts know.

        In your example I think management did an excellent job: They hired some expert consultants to come look at their system and find out what they were doing wrong. Well, assuming they then acted on the expert recommendations. If they didn't hire IT staff -- i

        • While what you say is true, it raises the question of how management is supposed to know that process engineers who say they've got the security thing under control are wrong.

          Well, two things:
          1) Sites which have the problem almost never ask the question; rather, they dictated the "solution" (send 'em on a training course) which caused the problem in the first place.
          2) Management is paid to understand when to seek external advice. Management fundamentally doesn't understand most things which go on in detail. Trusting your own people on security is just poor management, which is precisely why consultancies, assurance, standards, best practices, etc., exist in the first place.

          T

    • by Entrope ( 68843 )

      Do you think there aren't chemical engineers who know cybersecurity? My wife is one. She started out with a specialization in control systems, and that general field recognized a long time ago that cybersecurity was important. Now she has switched to security entirely, and she's not the only one to have done so (especially after her first employer moved their R&D company to Texas). She's more familiar with the NIST SP 800 series than most of my CS and EE coworkers.

      • by DarkOx ( 621550 )

        Your wife sounds like the kind of person I'd want on my staff designing security policy around ICS.

          You are kind of making the grandparent's point though. I don't think they were implying that someone with a chemical/process engineering background can't do cyber security, but more that someone needs to make it their business to specialize in that, keep abreast of the issues, learn standards, etc. The person focused on what temperature the distillation column needs to be held at, and if reagents are being consumed

    • If you don't provide proper SSH or RDP access, you're just begging for your employees to open holes in your system with Team Viewer et al.

  • Sure, VPN gateways can have exploits too, but VPN with smartcard/USB tokens is the minimum to be expected. Maybe in the short term without dedicated laptops if there's a mass influx of remote workers, but they should have intranet-only company laptops ASAP (VM-based separation will just confuse the normies).

  • pot, meet kettle (Score:4, Insightful)

    by Tom ( 822 ) on Monday May 10, 2021 @01:00AM (#61367808) Homepage Journal

    "Whether they work for the state or not is increasingly irrelevant, given Russia's obvious policy of harboring and tolerating cyber crime," he said.

    Says someone in the USA. The country that invented spam and is also in the top 10 sources of cyber attacks.

    • Re:pot, meet kettle (Score:5, Interesting)

      by kot-begemot-uk ( 6104030 ) on Monday May 10, 2021 @01:15AM (#61367836) Homepage
      Russian cybercrime long ago left Russia. As their own infrastructure recovered from the hell of the '90s, their tolerance for cybercriminals dropped. They now have a working banking system, etc., and their powers that be take a very dim view of cybercriminals - it hits the pockets of the wrong people.

      Russian cybercrime is now elsewhere. The core is in Transnistria (moved there as early as the early 2000s) and Ukraine (both Ukraine proper and the rebel enclaves). There is a significant presence in the Baltic states (corrupt law enforcement and excellent connectivity). Some things, e.g. bot farms, are operated out of really weird locations like Turkmenistan. Some operations are even further afield - bot C&C can be found in some of the Pacific and Indian Ocean island nations, etc.

      Case in point: a sizeable chunk of the Solomon Islands network allocation is listed in various GeoIP databases as Russian. That's for a reason - it runs the Russian mob's bulletproof hosting.

      • by Viol8 ( 599362 )

        "and their powers that be take a very dim view of cyrbercriminals - it hits the pockets of the wrong people. "

        You'd have to be a particularly cretinously stupid russian hacker to hack sites in your own country because when (not if) they caught you your lifespan going forward would probably be measured in minutes (Ooops, hacker stepping in front of gun. Me very sorry) But I doubt the putin regime cares too much if foreign non ally sites get blizted and in fact it may well have its tacit blessing.

        • by DarkOx ( 621550 )

          I doubt the Putin regime cares too much if foreign non-ally sites get blitzed, and in fact it may well have its tacit blessing.

          I would not count on that. Sanctions or no sanctions, you really think those kleptocrats don't have international investments, or investments that have international investments?

          Don't misunderstand: I am not saying Putin and his favored domestic allies will never attack western interests or anything of the sort, just that they are very unlikely to think willfully ignoring or actively condoning indiscriminate attacks by criminals serves their personal or Russia's national interests. They care about the specifics of the where, when, and what. This isn't the 18th century, and they are not issuing de facto letters of marque to attack anything with a US flag on it.

          I would say a lot of these higher-end ransomware/cyber-crime attacks, where there is some evidence the victim was actively targeted, are either state-sponsored and coordinated, or, if purely a criminal enterprise, viewed as undesirable by the host regime even if it is not willing to commit the resources to put a stop to it.

    • by J-1000 ( 869558 )

      It's the 3rd most populous nation, so being 10th in cyber attacks means a lot of smaller countries are punching well above their weight.

  • If the cost of this downtime is high enough, perhaps the company will re-think its security, or specifically the money spent on securing things. Nothing is unhackable, but with money you can make it prohibitively expensive to hack. People expect high security of banks guarding millions, so it seems reasonable to have similar expectations for securing infrastructure with downtime costs in millions, right?

  • It might be time to get all those PDP-11's off the internet, and replace them with something modern. Maybe get some Pentium 4's and install XP on them...
    • by storkus ( 179708 )

      It might be time to get all those PDP-11's off the internet, and replace them with something modern. Maybe get some Pentium 4's and install XP on them...

      You didn't read correctly then: didn't you see they were running TeamViewer and M$ RDP? That's EXACTLY what they did!

    • Hey, I have fond memories of playing Star Trek on the PDP-11 during my high school days...

  • One would think that an operation this critical and big has both the funds and the skills to secure its IT. Apparently some "manager" did this on the cheap or not at all.

  • I can't tell from the article if it was the control systems that were affected, or only business systems.
  • Humans won't prioritize security unless they suffer for ignoring it.

    We need MORE such attacks to coerce the building of robust systems. Nothing else will work, because human nature.

    BTW this pipeline worked fine before computers, which is a hint it does not require them. It should not be a goal to blindly infest every system with computers.

  • by oh_my_080980980 ( 773867 ) on Monday May 10, 2021 @01:07PM (#61369762)
    Colonial Pipeline can use this to distract from their major spill in North Carolina.

    The other thing is that it appears Colonial intentionally took their operational network down over billing: with their IT network locked by ransomware, they would not be able to invoice customers who receive fuel, and so would not be paid for it. https://twitter.com/KimZetter/... [twitter.com]

    Ain't capitalism grand...
