Businesses

IBM To Buy Software AG's Enterprise Integration Platforms For $2.3 Billion 11

An anonymous reader quotes a report from Reuters: IBM said on Monday that it would buy Software AG's enterprise integration platforms for 2.13 billion euros ($2.33 billion) to bolster its artificial intelligence and hybrid cloud offerings. IBM will acquire Software AG's StreamSets and webMethods platforms with available cash on hand, it said. The two units formed Software AG's so-called "Super iPaaS" business, which was launched in October.

The platforms provide application integration, application programming interface (API) management, and data integration among other uses. Software AG is majority owned by private equity firm Silver Lake, which currently owns 93.3% of shares in the German software company, following a takeover pursuit spanning several months. That deal valued the whole business at 2.6 billion euros ($2.84 billion). The transaction is subject to regulatory approvals and is expected to be completed in the second quarter of 2024.
"The opportunity to bring the StreamSets and webMethods teams together with IBM to innovate in building the future of hybrid cloud and next-generation AI solutions for the enterprise is uniquely compelling," Christian Lucas, chairman of the supervisory board of Software AG said in a statement.
United States

New York Joins IBM, Micron in $10 Billion Chip Research Complex (wsj.com) 17

New York has partnered with chip firms to build a $10 billion semiconductor research site at the University at Albany, featuring cutting-edge ASML equipment to develop the most advanced chips. From a report: Once the machinery is installed, the project and its partners will begin work on next-generation chip manufacturing there, according to New York Gov. Kathy Hochul's office. The partners include tech giant IBM, memory manufacturer Micron and chip manufacturing equipment makers Applied Materials and Tokyo Electron.

The expansion could help New York's bid to be designated a research hub under last year's $53 billion Chips Act. That legislation included $11 billion for a National Semiconductor Technology Center to foster domestic chip research and development. Expanding domestic chip manufacturing and research has become a federal and state-level priority in recent years as concern grows in the U.S. over China's expanding grasp over the industry. Chips are increasingly seen as a crux of geopolitical power, underlying advanced weapons for militaries and sophisticated artificial-intelligence systems.

IBM

Can IBM's Watson Translate the World's 60-Year-Old Cobol Code? (pcmag.com) 120

"Every day, 3 trillion dollars worth of transactions are handled by a 64-year-old programming language that hardly anybody knows anymore," writes PC Magazine. But most schools don't teach the mainframe programming language COBOL any more, and "COBOL cowboys" are aging out of the workforce, with replacements in short supply.

"This is precisely the kind of problem that IBM thinks it can fix with AI." IBM's approach is fairly straightforward: Rather than relying exclusively on a limited pool of human programmers to solve the problem, it built a generative AI-powered code assistant (watsonx) that helps convert all that dusty old COBOL code to a more modern language, thereby saving coders countless hours of reprogramming. In extremely simplified terms, the process is similar to feeding an essay written in English into ChatGPT and asking it to translate certain paragraphs into Esperanto. It allows programmers to take a chunk of COBOL and enlist watsonx to transform it into Java.
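As a rough illustration of the kind of COBOL-to-Java transformation described above (the COBOL fragment and all Java names here are invented for this sketch, not actual watsonx output):

```java
public class OrderTotal {
    // COBOL source (invented example for illustration):
    //     PERFORM VARYING I FROM 1 BY 1 UNTIL I > ITEM-COUNT
    //         ADD ITEM-PRICE(I) TO ORDER-TOTAL
    //     END-PERFORM.
    //
    // A Java rendering a translation tool might emit for the loop above.
    static long orderTotal(long[] itemPrices) {
        long total = 0;
        for (long price : itemPrices) {
            total += price;   // ADD ITEM-PRICE(I) TO ORDER-TOTAL
        }
        return total;
    }

    public static void main(String[] args) {
        // Prices in cents, as a COBOL PIC 9 field might hold them.
        System.out.println(orderTotal(new long[]{199, 250, 51})); // 500
    }
}
```

The mechanical translation is the easy part; as the article notes, the real work is understanding the data flow and dependencies before refactoring anything.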

But of course, it's not quite that simple in practice... After IBM and the customer have a thorough understanding of the application landscape, the data flow, and the existing dependencies, "we help them refactor their applications," says IBM's Vice President of Product Management, IT Automation, Keri Olson. "That is, breaking it down into smaller pieces, which the customer can selectively choose, at that point, to do the modernization from COBOL to Java." Skyla Loomis, IBM's Vice President of IBM Z Software adds, "But you have to remember that this is a developer assistant tool. It's AI assisted, but it still requires the developer. So yes, the developer is involved with the tooling and helping the customers select the services."

Once the partnership between man and machine is established, the AI steps in and says, 'Okay, I want to transform this portion of code.' The developer may still need to perform some minor editing of the code that the AI provides, Loomis explains. "It might be 80 or 90 percent of what they need, but it still requires a couple of changes. It's a productivity enhancement — not a developer replacement type of activity."

The article quotes a skeptical Gartner Distinguished Vice President and Analyst, who notes that IBM "has no case studies, at this time, to validate its claims."
Supercomputing

Quantum Computer Sets Record For Largest Ever Number of 'Logical Quantum Bits' (newscientist.com) 16

An anonymous reader quotes a report from New Scientist: Another quantum computing record has been broken. A team has built a quantum computer with the largest ever number of so-called logical qubits (quantum bits). Unlike standard qubits, logical qubits are better able to carry out computations unmarred by errors, making the new device a potentially important step towards practical quantum computing. How complicated a calculation a quantum computer can complete depends on the number of qubits it contains. Recently, IBM and California-based Atom Computing unveiled devices with more than 1000 qubits, nearly tripling the size of the previously largest quantum computers. But the existence of these devices has not led to an immediate and dramatic increase in computing capability, because larger quantum computers often also make more errors.

To make a quantum computer that can correct its errors, researchers from the quantum computing start-up QuEra in Boston and several academics focused instead on increasing its number of logical qubits, which are groups of qubits that are connected to each other through quantum entanglement. In conventional computers, error-correction relies on keeping multiple redundant copies of information, but quantum information is fundamentally different and cannot be copied -- so researchers use entanglement to spread it across several qubits, which achieves a similar redundancy, says Dolev Bluvstein at Harvard University in Massachusetts who was part of the team. To make their quantum computer, the researchers started with several thousand rubidium atoms in an airless container. They then used forces from lasers and magnets to cool the atoms to temperatures close to absolute zero where their quantum properties are most prominent. Under these conditions, they could control the atoms' quantum states very precisely by again hitting them with lasers. Accordingly, they first created 280 qubits from the atoms and then went a step further by using another laser pulse to entangle groups of those -- for instance, 7 qubits at a time -- to make a logical qubit. By doing this, the researchers were able to make as many as 48 logical qubits at one time. This is more than 10 times the number of logical qubits that have ever been created before.
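The redundancy idea behind logical qubits has a loose classical analogy in the repetition code sketched below. Real quantum codes cannot copy states and instead spread information through entanglement, so this is only an illustration of majority-vote error correction, with invented parameters:

```java
public class RepetitionCode {
    // Classical analogy only: encode one logical bit as n physical copies.
    // (Quantum codes use entanglement rather than copying, since quantum
    // states cannot be cloned; the redundancy principle is what carries over.)
    static int[] encode(int bit, int n) {
        int[] physical = new int[n];
        java.util.Arrays.fill(physical, bit);
        return physical;
    }

    // Majority vote recovers the logical bit even if a minority of the
    // physical carriers have flipped.
    static int decode(int[] physical) {
        int ones = 0;
        for (int b : physical) ones += b;
        return ones * 2 > physical.length ? 1 : 0;
    }

    public static void main(String[] args) {
        int[] word = encode(1, 7);  // 7 physical bits per logical bit
        word[2] = 0;                // inject one error
        word[5] = 0;                // and a second
        System.out.println(decode(word)); // still decodes to 1
    }
}
```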

"It's a big deal to have that many logical qubits. A very remarkable result for any quantum computing platform," says Mark Saffman at the University of Wisconsin-Madison. He says that the new quantum computer greatly benefits from being made of atoms that are controlled by light because this kind of control is very efficient. QuEra's computer makes its qubits interact and exchange information by moving them closer to each other inside the computer with optical "tweezers" made of laser beams. In contrast, chip-based quantum computers, like those made by IBM and Google, must use multiple wires to control each qubit. Bluvstein and his colleagues implemented several computer operations, codes and algorithms on the new computer to test the logical qubits' performance. He says that though these tests were more preliminary than the calculations that quantum computers will eventually perform, the team already found that using logical qubits led to fewer errors than seen in quantum computers using physical qubits.
The research has been published in the journal Nature.
AI

Meta, IBM Create Industrywide AI Alliance To Share Technology (bloomberg.com) 6

Meta and IBM are joining more than 40 companies and organizations to create an industry group dedicated to open source artificial intelligence work, aiming to share technology and reduce risks. From a report: The coalition, called the AI Alliance, will focus on the responsible development of AI technology, including safety and security tools, according to a statement Tuesday. The group also will look to increase the number of open source AI models -- rather than the proprietary systems favored by some companies -- develop new hardware and team up with academic researchers.

Proponents of open source AI technology, which is made public by developers for others to use, see the approach as a more efficient way to cultivate the highly complex systems. Over the past few months, Meta has been releasing open source versions of its large language models, which are the foundation of AI chatbots.

IBM

IBM Claims Quantum Computing Research Milestone (ft.com) 33

Quantum computing is starting to fulfil its promise as a crucial scientific research tool, IBM researchers claim, as the US tech group attempts to quell fears that the technology will fail to match high hopes for it. From a report: The company is due to unveil 10 projects on Monday that point to the power of quantum calculation when twinned with established techniques such as conventional supercomputing, said Dario Gil, its head of research. "For the first time now we have large enough systems, capable enough systems, that you can do useful technical and scientific work with it," Gil said in an interview. The papers presented on Monday are the work of IBM and partners including the Los Alamos National Laboratory, University of California, Berkeley, and the University of Tokyo. They focus mainly on areas such as simulating quantum physics and solving problems in chemistry and materials science.

Expectations that quantum systems would by now be close to commercial uses prompted a wave of funding for the technology in recent years. But signs that business applications are further off than expected have led to warnings of a possible "quantum winter" of waning investor confidence and financial backing. IBM's announcements suggest the technology's main applications have not yet fully extended to the broad range of commercialisable computing tasks many in the field want to see. "It's going to take a while before we go from scientific value to, let's say, business value," said Jay Gambetta, IBM's vice-president of quantum. "But in my opinion the difference between research and commercialisation is getting tighter."

China

Huawei and Tencent Spearhead China's Hold on Cybersecurity Patents (nikkei.com) 28

China's presence is growing in cybersecurity technology, with companies such as Huawei and Tencent accounting for six of the top 10 global patent holdings in the sector as of August. From a report: Chinese companies have made headway in technological fields that affect economic security, according to industry insiders, as they focus on fostering their own tech amid the growing standoff between the U.S. and China. The rankings, compiled by Nikkei in cooperation with U.S. information services provider LexisNexis, are based on patents registered in 95 countries and regions, including Japan, the U.S., China and the European Union. Patent registrations were screened for the cybersecurity field using such factors as the international patent classification, with filings of the same patent in multiple countries counted as a single patent.

As of August, IBM led the rankings with 6,363 patents. Huawei Technologies came in second with 5,735 patents and Tencent Holdings placed third with 4,803. Other Chinese companies in the top 10 included financial services provider Ant Group in sixth with 3,922 patents, followed by power transmission company State Grid Corp. of China with 3,696, Alibaba Group Holding with 3,122 and sovereign wealth fund China Investment with 3,042. Patent applications filed by Chinese companies have increased since around 2018, when the U.S. began to impose full-scale export controls on Chinese high-tech companies. Compared with 10 years ago, IBM's patent holdings increased by a factor of 1.5. In contrast, holdings for Huawei and Tencent were 2.3 times and 13 times higher, respectively.
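The counting rule described above (filings of the same invention in multiple countries count once) amounts to deduplicating by patent family. A minimal sketch, with invented family identifiers:

```java
import java.util.HashSet;
import java.util.List;

public class PatentFamilies {
    // National filings of the same invention share one family id, so
    // counting distinct family ids counts each patent once. The ids
    // below are invented; real family data comes from patent databases.
    static long distinctFamilies(List<String> familyIds) {
        return new HashSet<>(familyIds).size();
    }

    public static void main(String[] args) {
        // Same invention filed in the US, EU and Japan -> one family.
        List<String> filings = List.of("FAM-100", "FAM-100", "FAM-100", "FAM-200");
        System.out.println(distinctFamilies(filings)); // 2
    }
}
```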

News

Martin Goetz, Who Received the First Software Patent, Dies at 93 35

Martin Goetz, who joined the computer industry in its infancy in the mid-1950s as a programmer working on Univac mainframes and who later received the first U.S. patent for software, died on Oct. 10 at his home in Brighton, Mass. He was 93. The New York Times: His daughter Karen Jacobs said the cause was leukemia. In 1968, nearly a decade after he and several other partners started the company Applied Data Research, Mr. Goetz received his patent, for data-sorting software for mainframes. It was major news in the industry: An article in Computerworld magazine bore the headline "First Patent Is Issued for Software, Full Implications Are Not Known." Until then, software had not been viewed as a patentable product, one that was bundled into hulking mainframes like those made by IBM. Ms. Jacobs said her father had patented his own software so that IBM could not copy it and put it on its machines.

"By 1968, I had been involved in arguing about the patentability of software for about three years," Mr. Goetz said in an oral history interview in 2002 for the University of Minnesota. "I knew at some point in time the patent office would recognize it." What Mr. Goetz called his "sorting system" is believed to have been the first software product to be sold commercially, and his success at securing a patent led him to become a vocal champion of patenting software. The programs that instruct computers on what to do, he said, were often as worthy of patents as the machines themselves. The issuance of Mr. Goetz's patent "helped managers, programmers and lawyers at young software firms feel as if they were forming an industry of their own -- one in which they were creating products that were potentially profitable and legally defensible as proprietary inventions," Gerardo Con Diaz, a professor of science and technology studies at the University of California, Davis, wrote in the 2019 book "Software Rights: How Patent Law Transformed Software Development."
Further reading, from Slashdot archive: Recipient of First Software Patent Defends Them (2009).
Open Source

Unless Open Source Evolves, HashiCorp CEO Predicts OSS-Free Silicon Valley (www.thestack.technology) 84

Slashdot reader Striek remembers Silicon Valley's long history of open source development — and how HashiCorp "made the controversial decision to change licenses from the Mozilla Public License to MariaDB's Business Source License." The key difference between these two licenses is that the BSL limits its grant to "non-production use."

HashiCorp's CEO is now predicting there would be "no more open source companies in Silicon Valley" unless the community rethinks how it protects innovation, reports The Stack: While open source advocates had slammed [HashiCorp's] license switch, CEO Dave McJannet described the reaction from its largest customers as "Great. Because you're a critical partner to us and we need you to be a big, big company." Indeed, he claimed that "A lot of the feedback was, 'we wished you had done that sooner'" — adding that the move had been discussed with the major cloud vendors ahead of the announcement. "Every vendor over the last three or four years that has reached any modicum of scale has come to the same conclusion," said McJannet. "It's just the realisation that the open source model has to evolve, given the incentives that are now in the market."

He claimed the historic model of foundations was broken, as they were dominated by legacy vendors. Citing the case of Hadoop, he said: "They're a way for big companies to protect themselves from innovation, by making sure that if Hadoop becomes popular, IBM can take it and sell it for less because they are part of that foundation." The evolution to putting open source products on GitHub had worked "really, really well" but once a project became popular, there was an incentive for "clone vendors to start taking that stuff." He claimed that "My phone started ringing materially after we made our announcement from every open source startup in Silicon Valley going 'I think this is the right model'."

He said the Linux Foundation's adoption of OpenTofu raised serious questions. "What does it say for the future of open source, if foundations will just take it and give it a home? That is tragic for open source innovation. I will tell you, if that were to happen, there'll be no more open source companies in Silicon Valley."

Hashicorp also announced a beta using generative AI to produce new module tests, and HCP Vault Radar, which scans code for secrets, personally identifiable information, dependency vulnerabilities, and non-inclusive language.
AI

'Mind-Blowing' IBM Chip Speeds Up AI (nature.com) 21

An anonymous reader shares a report: A brain-inspired computer chip that could supercharge artificial intelligence by working faster with much less power has been developed by researchers at IBM in San Jose, California. Their massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do -- while consuming vastly less power.

"Its energy efficiency is just mind-blowing," says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. "I feel the paper will shake the common thinking in computer architecture." NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other object.
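The layered pattern detection described above can be sketched as a chain of dense layers, each transforming its input and passing the result up. The weights and inputs below are toy values invented for illustration, not anything from NorthPole:

```java
public class TinyNet {
    // One dense layer with ReLU activation: y_i = max(0, sum_j w[i][j] * x[j]).
    // Stacking calls to this method mirrors the bottom-to-top layering the
    // article describes; real networks add biases and learned weights.
    static double[] layer(double[][] w, double[] x) {
        double[] y = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double s = 0;
            for (int j = 0; j < x.length; j++) s += w[i][j] * x[j];
            y[i] = Math.max(0, s); // ReLU
        }
        return y;
    }

    public static void main(String[] args) {
        double[] pixels = {1.0, 0.5};            // "bottom layer" input
        double[][] w1 = {{1, -1}, {0.5, 0.5}};   // hidden layer (toy weights)
        double[][] w2 = {{1, 1}};                // output layer (toy weights)
        double[] hidden = layer(w1, pixels);
        double[] out = layer(w2, hidden);
        System.out.println(out[0]); // 1.25
    }
}
```

NorthPole's contribution is not the math, which is standard, but co-locating the weights with the compute so the chip rarely reaches out to external memory.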

IBM

IBM CEO in Damage Control Mode After AI Job Loss Comments (itpro.com) 51

IBM CEO Arvind Krishna appears to be in a state of damage control following recent controversial comments on AI-related job losses. From a report: Speaking at an event in the US this week, Krishna said IBM has no intention of laying off tech staff, such as developers or programmers, and instead plans to ramp up hiring for roles in these areas. "I don't intend to get rid of a single one," he said. "I'll get more." Krishna added that the company aims to increase the number of software engineering and sales staff over the next four years to accommodate its heightened focus on generative AI. Instead, the hammer will fall largely on staff working in back-office operations, aligning closely with what we've heard previously from the exec.

Earlier this year, IBM announced plans to cut nearly 8,000 staff working in positions spanning human resources in a bid to automate roles. The move means that anywhere up to 7,800 jobs at the tech giant's HR division could be cut, equivalent to around 30% of the overall workforce in the unit. IBM also said at the time that it would halt hiring for roles in the division on account of positions being automated.

Krishna has been among the most outspoken big tech executives on the topic of AI job losses in recent months. While industry figureheads have repeatedly shirked the topic, Krishna, to his credit, has been candid on the subject. In an interview with CNBC in August, Krishna suggested "we should all feel better" about the influx of generative AI tools, much to the ire of critics worried about its impact on the labor market. Krishna also told the broadcaster that organizations can deliver marked improvements to productivity through generative AI, but that will come at the expense of human roles.

Android

Lenovo To Offer Android PCs, Starting With an All-In-One That Can Pack a Core i9 (theregister.com) 25

Simon Sharwood writes via The Register: The Chinese manufacturer that took over IBM's PC business announced on Thursday that it's teamed with an outfit named Esper that specializes in custom cuts of Android, plus device management offerings. Android is most commonly used in handheld devices. Lenovo's taking it in an entirely different direction by making the ThinkCentre M70a: a desktop all-in-one.

The first fruit of the collaboration with Esper, the ThinkCentre M70a boasts a 21-inch touch screen and offers a choice of 12th-gen Intel Core CPUs from the Core i3 to the almost workstation-grade Core i9, at prices from $889 to beyond $1250. What could you do with Android on a Core i9, plus the maximum 16GB DDR4 3200MHz and 512GB PCIe SSD Lenovo's machines allow? Almost anything -- but Lenovo thinks its Android effort will first be appreciated by customers in the retail, hospitality, and healthcare industries. Esper pitches its wares as ideal for point-of-sale systems, kiosks, and digital signage -- environments where users don't need to access diverse apps but do need a machine that reliably boots into custom environments.

Lenovo's not just doing desktop PCs. The number one PC maker by market share has promised it will also ship Esper's wares on the small form factor ThinkCentre M70q -- a machine designed to be bolted to the back of monitors. The ThinkEdge SE30 -- a ruggedized and fanless edge client -- will also have an Android option. So will the ThinkCentre M90n-1 IoT [PDF] -- another rugged client for edge applications.

AI

Musk Warns Senators About AI Threat, While Gates Says the Technology Could Target World Hunger (wsj.com) 99

Elon Musk, Bill Gates, Mark Zuckerberg and other technology heavyweights debated the possibilities and risks of artificial intelligence Wednesday in a closed-door meeting with more than 60 U.S. Senators who are contemplating legislation to regulate the technology. WSJ: Musk, the CEO of Tesla and owner of X (formerly Twitter), warned about what he views as AI's potential to threaten humanity, according to a participant. Microsoft founder Gates said the technology could help address world hunger, said Sen. Chuck Schumer (D., N.Y.), who convened the session. Other speakers included Facebook founder Zuckerberg and the CEOs of Google, Microsoft, Nvidia and IBM, along with union leaders. Schumer at one point asked the guests if they agreed that the government needed to play a role in regulating artificial intelligence. Everyone present raised their hands, Schumer said during a break in the day-long session.

Despite that consensus -- and Schumer's vow to move toward passing legislation within months -- the meeting also laid bare some of the tension points ahead. One debate centered on the practice of making certain AI programs "open source," or available for the public to download and modify. Some in the room raised concerns about the practice, which has the potential to put powerful AI systems in the hands of bad actors, according to one participant. But Zuckerberg, whose company Meta Platforms has released powerful open source models, defended the practice. He told Senators in his opening statement that open source "democratizes access to these tools, and that helps level the playing field and foster innovation for people and businesses," according to excerpts released by the company. Another point of tension related to workers who see AI as a potential threat to their jobs. Sen. Maria Cantwell (D., Wash.) recounted a moment where the head of the Writers Guild of America West, Meredith Stiehm, described the views of members who are on strike seeking a new contract with Hollywood studios in part to address those fears.

AI

Adobe, Others Join White House's Voluntary Commitments on AI (reuters.com) 13

Adobe, IBM, Nvidia and five other firms have signed President Joe Biden's voluntary commitments governing artificial intelligence, which requires steps such as watermarking AI-generated content, the White House said. From a report: The original commitments, which were announced in July, were aimed at ensuring that AI's considerable power was not used for destructive purposes. Google, OpenAI and OpenAI partner Microsoft signed onto the commitments in July.
Graphics

Hobbyist Builds HDMI ISA Graphics Card For Vintage PCs By Improving Graphics Gremlin (yeokhengmeng.com) 60

Earlier this year, Singapore-based embedded security researcher yeokm1 built a ChatGPT client for MS-DOS.

Now they're back with a new project: HDMI is a relatively modern video connector we take for granted on modern PCs and monitors. Now vintage PCs can join in the fun too with a native connection to modern HDMI monitors without any additional adapter.

Two years ago, I learned of an open-source project called Graphics Gremlin by Eric Schlaepfer who runs the website Tubetime.us. It is an 8-bit ISA graphics card that supports display standards like Color Graphics Adapter (CGA) and Monochrome Display Adapter (MDA). CGA and MDA are display standards used by older IBM(-compatible) PCs in the 1980s. The frequencies and connectors used by CGA and MDA are no longer supported by modern monitors, hence it is difficult for older PCs of the 1980s era to have modern displays connected to them without external adapters. Graphics Gremlin addresses this problem by using techniques like scan doubling (for CGA) and increasing the vertical refresh rate (for MDA) then outputting to a relatively newer but still old VGA port.
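Scan doubling, as used for the CGA path, simply emits each source scanline twice, so a 200-line CGA frame fills a 400-line VGA-class mode at double the horizontal scan rate. A minimal sketch of the idea (the real card does this in FPGA logic, not software):

```java
public class ScanDouble {
    // Duplicate every scanline: frame[y] appears at output rows 2y and 2y+1.
    // This lets a 15.7 kHz-era CGA image be presented at VGA-style timings.
    static int[][] scanDouble(int[][] frame) {
        int[][] out = new int[frame.length * 2][];
        for (int y = 0; y < frame.length; y++) {
            out[2 * y] = frame[y].clone();      // original line
            out[2 * y + 1] = frame[y].clone();  // repeated line
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] cgaFrame = new int[200][320];   // 320x200 CGA-style frame
        System.out.println(scanDouble(cgaFrame).length); // 400 lines out
    }
}
```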

I fabricated and assembled the design then installed it into my IBM 5155... I decided to modify the Graphics Gremlin design so it can connect natively to an external HDMI monitor and service the internal Composite-based CRT at the same time.

The post concludes triumphantly with a photo of their IBM 5155 running the CGA Compatibility Tester displaying the color palette.
IBM

ArcaOS 5.1.0 (OEM OS/2 Warp Operating System) Now Available (arcanoae.com) 46

Slashdot reader martiniturbide writes: ArcaOS 5.1.0 is an OEM distribution of IBM's discontinued OS/2 Warp operating system. This new version of ArcaOS offers UEFI compatibility allowing it to run in modern x86 hardware and also includes the ability to install to GPT-based disk layouts.

At OS2World the OS/2 community has been called upon to report supported hardware, open source any OS/2 software, make public as much OS/2 documentation as possible and post the important platform links. OS2World insists that open source has helped OS/2 in the past years and it is time to look under the hood to try to clone internal components like Control Program, Presentation Manager, SOM and Workplace Shell.

Government

IBM Returns To the Facial Recognition Market 17

During the Black Lives Matter protests in 2020, IBM announced that it would no longer offer "general purpose" facial recognition technology due to concerns about racial profiling, mass surveillance, and other human rights violations. Now, according to The Verge and Liberty Investigates, "IBM signed a $69.8 million contract with the British government to develop a national biometrics platform that will offer a facial recognition function to immigration and law enforcement officials." From the report: A contract notice for the Home Office Biometrics Matcher Platform outlines how the project initially involves developing a fingerprint matching capability, while later stages introduce facial recognition for immigration purposes -- described as "an enabler for strategic facial matching for law enforcement." The final stage of the project is described as delivery of a "facial matching for law enforcement use-case." The platform will allow photos of individuals to be matched against images stored on a database -- what is sometimes known as a "one-to-many" matching system. In September 2020, IBM described such "one-to-many" matching systems as "the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights."
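A "one-to-many" matcher compares a single probe image against every record in a gallery and reports the best score above a threshold. The sketch below uses cosine similarity over toy embedding vectors; the vectors, threshold, and names are invented for illustration and bear no relation to the Home Office system:

```java
public class OneToMany {
    // Return the gallery index whose embedding is most similar to the probe,
    // or -1 if no entry's cosine similarity clears the threshold.
    static int bestMatch(double[] probe, double[][] gallery, double threshold) {
        int best = -1;
        double bestScore = threshold;
        for (int i = 0; i < gallery.length; i++) {
            double dot = 0, na = 0, nb = 0;
            for (int j = 0; j < probe.length; j++) {
                dot += probe[j] * gallery[i][j];
                na += probe[j] * probe[j];
                nb += gallery[i][j] * gallery[i][j];
            }
            double score = dot / Math.sqrt(na * nb); // cosine similarity
            if (score > bestScore) { bestScore = score; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] gallery = {{1, 0}, {0, 1}};           // toy "face" embeddings
        double[] probe = {0.95, 0.05};                   // toy probe embedding
        System.out.println(bestMatch(probe, gallery, 0.9)); // 0 (closest entry)
    }
}
```

Scanning everyone in a database per query, rather than verifying one claimed identity, is exactly why critics treat one-to-many matching as a mass-surveillance capability.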

IBM spokesman Imtiaz Mufti denied that its work on the contract was in conflict with its 2020 commitments. "IBM no longer offers general-purpose facial recognition and, consistent with our 2020 commitment, does not support the use of facial recognition for mass surveillance, racial profiling, or other human rights violations," he said. "The Home Office Biometrics Matcher Platform and associated Services contract is not used in mass surveillance. It supports police and immigration services in identifying suspects against a database of fingerprint and photo data. It is not capable of video ingest, which would typically be needed to support face-in-a-crowd biometric usage."

Human rights campaigners, however, said IBM's work on the project is incompatible with its 2020 commitments. Kojo Kyerewaa of Black Lives Matter UK said: "IBM has shown itself willing to step over the body and memory of George Floyd to chase a Home Office contract. This won't be forgotten." Matt Mahmoudi, PhD, tech researcher at Amnesty International, said: "The research across the globe is clear; there is no application of one-to-many facial recognition that is compatible with human rights law, and companies -- including IBM -- must therefore cease its sale, and honor their earlier statements to sunset these tools, even and especially in the context of law and immigration enforcement where the rights implications are compounding."
Operating Systems

FreeBSD Can Now Boot in 25 Milliseconds (theregister.com) 77

Replacing a sort algorithm in the FreeBSD kernel has improved its boot speed by a factor of 100 or more... and although it's aimed at a micro-VM, the gains should benefit everyone. From a report: MicroVMs are a hot area of technology R&D in the last half decade or so. The core idea is a re-invention of some of the concepts and technology that IBM invented along with the hypervisor in the 1960s: designing OSes specifically to run as guests under another OS. This means building the OS specifically to run inside a VM, and to talk to resources provided by a specific hypervisor rather than to fake hardware.

This means that the guest OS needs next to no support for real hardware, just VirtIO drivers which talk directly to facilities provided by the host hypervisor. In turn, the hypervisor doesn't have to provide an emulated PCI bus, emulated power management, emulated graphics card, emulated network interface cards, and so on. The result of ruthlessly chopping down both the hypervisor and the OS that runs inside it is that both ends can be much smaller and simpler. That means that VMs can use much fewer resources, and start up much quicker.
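The headline speedup came from swapping a sort algorithm. As a generic illustration of why that matters (this is not FreeBSD's actual kernel code), a quadratic bubble sort performs on the order of n^2/2 comparisons, where a library sort needs only about n log n:

```java
import java.util.Arrays;
import java.util.Random;

public class InitSort {
    // Count comparisons while bubble-sorting, to make the O(n^2) cost visible.
    static long bubbleSortComparisons(int[] a) {
        long comparisons = 0;
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                comparisons++;
                if (a[j] > a[j + 1]) {
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 1000;                       // e.g. init records to order at boot
        int[] order = new int[n];
        Random rng = new Random(42);
        for (int i = 0; i < n; i++) order[i] = rng.nextInt();
        int[] copy = order.clone();

        long cmp = bubbleSortComparisons(order); // n*(n-1)/2 = 499500 comparisons
        Arrays.sort(copy);                       // O(n log n), same result
        System.out.println(Arrays.equals(order, copy)); // true
        System.out.println(cmp);                        // 499500
    }
}
```

At boot, where this work happens on the critical path before anything else can run, cutting a quadratic step to near-linear is exactly the kind of change that yields a 100x improvement.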

IT

Citizen Suspends Sales of Its Latest Smartwatch (theverge.com) 18

Citizen is temporarily suspending sales of its second-gen CZ Smart watch due to a "technical issue." From a report: The Wear OS watch, which launched in May, had a feature based on tech from IBM's Watson and NASA to track a person's alertness. It appears the decision stems from negative experiences from reviewers. Michael Fisher -- better known as MrMobile on YouTube -- noted that Citizen said it would suspend sales after he had reached out to the company about the watch's many issues. That was corroborated by a Wired story, in which reviewer Julian Chokkattu also detailed several bugs, like laggy screens, bad battery life, inaccurate tracking, and watchfaces that can't even tell the correct time.
Java

IBM Says Its Generative AI Tool Can Convert Old COBOL Code To Java (theregister.com) 108

IBM is introducing the watsonx Code Assistant for Z, a tool that uses generative AI to translate COBOL code to Java. This tool is set to be available in Q4 2023 and aims to speed up the translation of COBOL to Java on IBM's Z mainframes. The Register reports: According to IBM, there are billions of lines of COBOL code out there as potential candidates for modernization (a report last year estimated the total figure at 775-850 billion lines). For this reason, the generative AI features in watsonx Code Assistant for Z are intended to help developers to assess and determine the code most in need of modernization, allowing them to more speedily update large applications and focus on critical tasks.

IBM wants to provide tooling for each step of the modernization process, starting with its Application Discovery and Delivery Intelligence (ADDI) inventory and analysis tool. Other steps include refactoring business services in COBOL, transforming the code to Java code, and then validating the resulting outcome with the aid of automated testing. The resulting Java code emitted by watsonx Code Assistant for Z will be object-oriented, but will still interoperate with the rest of the COBOL application, IBM claimed, as well as with key services such as CICS, IMS, DB2, and other z/OS runtimes.
