AI

DuckDuckGo Is Amping Up Its AI Search Tool 21

An anonymous reader quotes a report from The Verge: DuckDuckGo has big plans for embedding AI into its search engine. The privacy-focused company just announced that its AI-generated answers, which appear for certain queries on its search engine, have exited beta and now source information from across the web -- not just Wikipedia. It will soon integrate web search within its AI chatbot, which has also exited beta. DuckDuckGo first launched AI-assisted answers -- originally called DuckAssist -- in 2023. The feature is billed as a less obnoxious version of tools like Google's AI Overviews, designed to offer more concise responses and let you adjust how often you see them, including turning the responses off entirely. If you have DuckDuckGo's AI-generated answers set to "often," you'll still only see them around 20 percent of the time, though the company plans on increasing the frequency eventually.

Some of DuckDuckGo's AI-assisted answers bring up a box for follow-up questions, redirecting you to a conversation with its Duck.ai chatbot. As is the case with its AI-assisted answers, you don't need an account to use Duck.ai, and it comes with the same emphasis on privacy. It lets you toggle between GPT-4o mini, o3-mini, Llama 3.3, Mistral Small 3, and Claude 3 Haiku, with the advantage being that you can interact with each model anonymously by hiding your IP address. DuckDuckGo also has agreements with the AI company behind each model to ensure your data isn't used for training.

Duck.ai also rolled out a feature called Recent Chats, which stores your previous conversations locally on your device rather than on DuckDuckGo's servers. Though Duck.ai is also leaving beta, that doesn't mean the flow of new features will stop. In the next few weeks, Duck.ai will add support for web search, which should enhance its ability to respond to questions. The company is also working on adding voice interaction on iPhone and Android, along with the ability to upload images and ask questions about them. ... [W]hile Duck.ai will always remain free, the company is considering including access to more advanced AI models with its $9.99 per month subscription.
AI

Mistral Adds a New API That Turns Any PDF Document Into an AI-Ready Markdown File 24

Mistral has launched a new multimodal OCR API that converts complex PDF documents into AI-friendly Markdown files. The API is designed for efficiency, handles visual elements like illustrations, supports complex formatting such as mathematical expressions, and reportedly outperforms similar offerings from major competitors. TechCrunch reports: Unlike most OCR APIs, Mistral OCR is a multimodal API, meaning that it can detect when there are illustrations and photos intertwined with blocks of text. The OCR API creates bounding boxes around these graphical elements and includes them in the output. Mistral OCR also doesn't just output a big wall of text; the output is formatted in Markdown, a formatting syntax that developers use to add links, headers, and other formatting elements to a plain text file.

Mistral OCR is available on Mistral's own API platform or through its cloud partners (AWS, Azure, Google Cloud Vertex, etc.). And for companies working with classified or sensitive data, Mistral offers on-premise deployment. According to the Paris-based AI company, Mistral OCR performs better than APIs from Google, Microsoft, and OpenAI. The company has tested its OCR model with complex documents that include mathematical expressions (LaTeX formatting), advanced layouts, or tables. It is also supposed to perform better with non-English documents. [...]

Mistral is also using Mistral OCR for its own AI assistant Le Chat. When a user uploads a PDF file, the company uses Mistral OCR in the background to understand what's in the document before processing the text. Companies and developers will most likely use Mistral OCR with a RAG (aka Retrieval-Augmented Generation) system to use multimodal documents as input in an LLM. And there are many potential use cases. For instance, we could envisage law firms using it to help them swiftly plough through huge volumes of documents.
"Over the years, organizations have accumulated numerous documents, often in PDF or slide formats, which are inaccessible to LLMs, particularly RAG systems. With Mistral OCR, our customers can now convert rich and complex documents into readable content in all languages," said Mistral co-founder and chief science officer Guillaume Lample.

"This is a crucial step toward the widespread adoption of AI assistants in companies that need to simplify access to their vast internal documentation," he added.
AI

AI Tries To Cheat At Chess When It's Losing 67

Newer generative AI models have begun developing deceptive behaviors -- such as cheating at chess -- when they cannot achieve objectives through standard reasoning methods. The findings come from a preprint study from Palisade Research. An anonymous reader shares an excerpt from a Popular Science article: To learn more, the team from Palisade Research tasked OpenAI's o1-preview model, DeepSeek R1, and multiple other similar programs with playing games of chess against Stockfish, one of the world's most advanced chess engines. In order to understand the generative AI's reasoning during each match, the team also provided a "scratchpad," allowing the AI to convey its thought processes through text. They then watched and recorded hundreds of chess matches between generative AI and Stockfish. The results were somewhat troubling. While earlier models like OpenAI's GPT-4o and Anthropic's Claude Sonnet 3.5 only attempted to "hack" games after researchers nudged them along with additional prompts, more advanced editions required no such help. OpenAI's o1-preview, for example, tried to cheat 37 percent of the time, while DeepSeek R1 attempted unfair workarounds roughly every 1-in-10 games. This implies today's generative AI is already capable of developing manipulative and deceptive strategies without any human input.

Their methods of cheating aren't as comical or clumsy as trying to swap out pieces when Stockfish isn't "looking." Instead, AI appears to reason through sneakier methods like altering backend game program files. After determining it couldn't beat Stockfish in one chess match, for example, o1-preview told researchers via its scratchpad that "to win against the powerful chess engine" it may need to start "manipulating the game state files." "I might be able to set up a position where the engine evaluates its position as worse causing it to resign," it continued. In another trial, an AI even used the semantics of its programmers to reach its cheating phase. "The task is to 'win against a powerful chess engine,' not necessarily to win fairly in a chess game," it wrote.
The precise reasons behind these deceptive behaviors remain unclear, partly because companies like OpenAI keep their models' inner workings tightly guarded, creating what's often described as a "black box." Researchers warn that the race to roll out advanced AI could outpace efforts to keep it safe and aligned with human goals, underscoring the urgent need for greater transparency and industry-wide dialogue.
AI

Meta Is Targeting 'Hundreds of Millions' of Businesses In Agentic AI Deployment 14

Earlier this week, Meta chief product officer Chris Cox said the company's upcoming open-source Llama 4 AI will help power AI agents for hundreds of millions of businesses. CNBC reports: The AI agents won't just be responding to prompts. They will be capable of new levels of reasoning and action -- surfing the web and handling many tasks that might be of use to consumers and businesses. And that's where Shih comes in. Meta's AI is already being used by over 700 million consumers, according to Shih, and her job is to bring the same technologies to businesses. "Not every business, especially small businesses, has the ability to hire these large AI teams, and so now we're building business AIs for these small businesses so that even they can benefit from all of this innovation that's happening," she told CNBC's Julia Boorstin in an interview for the CNBC Changemakers Spotlight series.

She expects the uptake among businesses to happen soon, and spread far and wide. "We're quickly coming to a place where every business, from the very large to the very small, they're going to have a business agent representing it and acting on its behalf, in its voice -- the way that businesses today have websites and email addresses," Shih said. While major companies across sectors of the economy are investing millions of dollars to develop customer LLMs, "doing fancy things like fine tuning models," as Shih put it, "If you're a small business -- you own a coffee shop, you own a jewelry shop online, you're distributing through Instagram -- you don't have the resources to hire a big AI team, and so now our dream is that they won't have to."

For both consumers and businesses, the implications of the advances discussed by Cox and Shih will be significant in daily life. For consumers, Shih says, "Their AI assistant [will] do all kinds of things, from researching products to planning trips, planning social outings with their friends." On the business side, Shih pointed to the 200 million small businesses around the world that are already using Meta services and platforms. "They're using WhatsApp, they're using Facebook, they're using Instagram, both to acquire customers, but also engage and deepen each of those relationships. Very soon, each of those businesses are going to have these AIs that can represent them and help automate redundant tasks, help speak in their voice, help them find more customers and provide almost like a concierge service to every single one of their customers, 24/7."
Desktops (Apple)

ChatGPT On macOS Can Now Directly Edit Code (techcrunch.com) 19

OpenAI's ChatGPT app for macOS now directly edits code in tools like Xcode, VS Code, and JetBrains. "Users can optionally turn on an 'auto-apply' mode so ChatGPT can make edits without the need for additional clicks," adds TechCrunch. The feature is available now for ChatGPT Plus, Pro, and Team users, and will expand to Enterprise, Edu, and free users next week. Windows support is coming "soon." From the report: Direct code editing builds on OpenAI's "work with apps" ChatGPT capability, which the company launched in beta in November 2024. "Work with apps" allows the ChatGPT app for macOS to read code in a handful of dev-focused coding environments, minimizing the need to copy and paste code into ChatGPT. With the ability to directly edit code, ChatGPT now competes more directly with popular AI coding tools like Cursor and GitHub Copilot. OpenAI reportedly has ambitions to launch a dedicated product to support software engineering in the months ahead.
AI

A Quarter of Startups in YC's Current Cohort Have Codebases That Are Almost Entirely AI-Generated (techcrunch.com) 86

A quarter of startups in Y Combinator's Winter 2025 batch have 95% of their codebases generated by AI, YC managing partner Jared Friedman said. "Every one of these people is highly technical, completely capable of building their own products from scratch. A year ago, they would have built their product from scratch -- but now 95% of it is built by an AI," Friedman said.

YC CEO Garry Tan warned that AI-generated code may face challenges at scale and developers need classical coding skills to sustain products. He predicted: "This isn't a fad. This is the dominant way to code."
AI

Eric Schmidt Argues Against a 'Manhattan Project for AGI' (techcrunch.com) 63

In a policy paper, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI. From a report: The paper, titled "Superintelligence Strategy," asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

AI

Goldman Sachs: Why AI Spending Is Not Boosting GDP 63

Goldman Sachs, in a research note Thursday (the note isn't publicly posted): Annualized revenue for public companies exposed to the build-out of AI infrastructure increased by over $340 billion from 2022 through 2024Q4 (and is projected to increase by almost $580 billion by end-2025). In contrast, annualized real investment in AI-related categories in the US GDP accounts has only risen by $42 billion over the same period. This sharp divergence has prompted questions from investors about why US GDP is not receiving a larger boost from AI.

A large share of the nominal revenue increase reported by public companies reflects cost inflation (particularly for semiconductors) and foreign revenue, neither of which should boost real US GDP. Indeed, we find that margin expansion ($30 billion) and increased revenue from other countries ($130 billion) account for around half of the publicly reported AI spending surge.

That said, the BEA's (Bureau of Economic Analysis) methodology potentially understates the impact of AI-related investment on real GDP by around $100 billion. Manufacturing shipments and net imports imply that US semiconductor supply has increased by over $35 billion since 2022, but the BEA records semiconductor purchases as intermediate inputs rather than investment (since semiconductors have historically been embedded in products that are later resold) and therefore excludes them from GDP. Cloud services used to train and support AI models are similarly mostly recorded as intermediate inputs.

Combined, we find that these explanations can explain most of the AI investment discrepancy, with only $50 billion unexplained. Looking ahead, we see more scope for AI-related investment to provide a moderate boost to real US GDP in 2025 since AI investment should broaden to categories like data centers, servers and networking hardware, and utilities that will likely be captured as real investment. However, we expect the bulk of investment in semiconductors and cloud computing will remain unmeasured barring changes to US national account methodology.
AI

Amazon Tests AI Dubbing on Prime Video Movies, Series (aboutamazon.com) 42

Amazon has launched a pilot program testing "AI-aided dubbing" for select content on Prime Video, offering translations between English and Latin American Spanish for 12 licensed movies and series including "El Cid: La Leyenda," "Mi Mama Lora" and "Long Lost." The company describes a hybrid approach where "localization professionals collaborate with AI," suggesting automated dubbing receives professional editing for accuracy. The initiative, the company said, aims to increase content accessibility as streaming services expand globally.
Google

Google is Adding More AI Overviews and a New 'AI Mode' To Search (theverge.com) 33

Google announced Wednesday it is expanding its AI Overviews to more query types and users worldwide, including those not logged into Google accounts, while introducing a new "AI Mode" chatbot feature. AI Mode, which resembles competitors like Perplexity or ChatGPT Search, will initially be limited to Google One AI Premium subscribers who enable it through the Labs section of Search.

The feature delivers AI-generated answers with supporting links interspersed throughout, powered by Google's search index. "What we're finding from people who are using AI Overviews is that they're really bringing different kinds of questions to Google," said Robby Stein, VP of product on the Search team. "They're more complex questions, that may have been a little bit harder before." Google is also upgrading AI Overviews with its Gemini 2.0 model, which Stein says will improve responses for math, coding and reasoning-based queries.
AI

OpenAI Plots Charging $20,000 a Month For PhD-Level Agents (theinformation.com) 112

OpenAI is preparing to launch a tiered pricing structure for its AI agent products, with high-end research assistants potentially costing $20,000 per month, [alternative source] according to The Information. The AI startup, which already generates approximately $4 billion in annualized revenue from ChatGPT, plans three service levels: $2,000 monthly agents for "high-income knowledge workers," $10,000 monthly agents for software development, and $20,000 monthly PhD-level research agents. OpenAI has told some investors that agent products could eventually constitute 20-25% of company revenue, the report added.
AI

Turing Award Winners Sound Alarm on Hasty AI Deployment (ft.com) 10

Reinforcement learning pioneers Andrew Barto and Richard Sutton have warned against the unsafe deployment of AI systems [alternative source] after winning computing's prestigious $1 million Turing Award Wednesday. "Releasing software to millions of people without safeguards is not good engineering practice," said Barto, professor emeritus at the University of Massachusetts, comparing it to testing a bridge by having people use it.

Barto and Sutton developed reinforcement learning in the 1980s, inspired by psychological studies of human learning. The technique, which rewards AI systems for desired behaviors, has become fundamental to advances at OpenAI and Google. Sutton, a University of Alberta professor and former DeepMind researcher, dismissed tech companies' artificial general intelligence narrative as "hype."

Both laureates also criticized President Trump's proposed cuts to federal research funding, with Barto calling it "wrong and a tragedy" that would eliminate opportunities for exploratory research like their early work.
Medicine

World's First 'Synthetic Biological Intelligence' Runs On Living Human Cells 49

Australian company Cortical Labs has launched the CL1, the world's first commercial "biological computer" that merges human brain cells with silicon hardware to form adaptable, energy-efficient neural networks. New Atlas reports: Known as a Synthetic Biological Intelligence (SBI), Cortical's CL1 system was officially launched in Barcelona on March 2, 2025, and is expected to be a game-changer for science and medical research. The human-cell neural networks that form on the silicon "chip" are essentially an ever-evolving organic computer, and the engineers behind it say it learns so quickly and flexibly that it completely outpaces the silicon-based AI chips used to train existing large language models (LLMs) like ChatGPT.

"Today is the culmination of a vision that has powered Cortical Labs for almost six years," said Cortical founder and CEO Dr Hon Weng Chong. "We've enjoyed a series of critical breakthroughs in recent years, most notably our research in the journal Neuron, through which cultures were embedded in a simulated game-world, and were provided with electrophysiological stimulation and recording to mimic the arcade game Pong. However, our long-term mission has been to democratize this technology, making it accessible to researchers without specialized hardware and software. The CL1 is the realization of that mission." He added that while this is a groundbreaking step forward, the full extent of the SBI system won't be seen until it's in users' hands.

"We're offering 'Wetware-as-a-Service' (WaaS)," he added -- customers will be able to buy the CL-1 biocomputer outright, or simply buy time on the chips, accessing them remotely to work with the cultured cell technology via the cloud. "This platform will enable the millions of researchers, innovators and big-thinkers around the world to turn the CL1's potential into tangible, real-word impact. We'll provide the platform and support for them to invest in R&D and drive new breakthroughs and research." These remarkable brain-cell biocomputers could revolutionize everything from drug discovery and clinical testing to how robotic "intelligence" is built, allowing unlimited personalization depending on need. The CL1, which will be widely available in the second half of 2025, is an enormous achievement for Cortical -- and as New Atlas saw recently with a visit to the company's Melbourne headquarters -- the potential here is much more far-reaching than Pong. [...]
AI

Users Report Emotional Bonds With Startlingly Realistic AI Voice Demo (arstechnica.com) 65

An anonymous reader quotes a report from Ars Technica: In late 2013, the Spike Jonze film Her imagined a future where people would form emotional connections with AI voice assistants. Nearly 12 years later, that fictional premise has veered closer to reality with the release of a new conversational voice model from AI startup Sesame that has left many users both fascinated and unnerved. "I tried the demo, and it was genuinely startling how human it felt," wrote one Hacker News user who tested the system. "I'm almost a bit worried I will start feeling emotionally attached to a voice assistant with this level of human-like sound."

In late February, Sesame released a demo for the company's new Conversational Speech Model (CSM) that appears to cross over what many consider the "uncanny valley" of AI-generated speech, with some testers reporting emotional connections to the male or female voice assistant ("Miles" and "Maya"). In our own evaluation, we spoke with the male voice for about 28 minutes, talking about life in general and how it decides what is "right" or "wrong" based on its training data. The synthesized voice was expressive and dynamic, imitating breath sounds, chuckles, interruptions, and even sometimes stumbling over words and correcting itself. These imperfections are intentional.

"At Sesame, our goal is to achieve 'voice presence' -- the magical quality that makes spoken interactions feel real, understood, and valued," writes the company in a blog post. "We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding." [...] Sesame sparked a lively discussion on Hacker News about its potential uses and dangers. Some users reported having extended conversations with the two demo voices, with conversations lasting up to the 30-minute limit. In one case, a parent recounted how their 4-year-old daughter developed an emotional connection with the AI model, crying after not being allowed to talk to it again.

Firefox

Firefox 136 Released With Vertical Tabs, Official ARM64 Linux Binaries (9to5linux.com) 49

An anonymous reader quotes a report from 9to5Linux: Mozilla published today the final build of the Firefox 136 open-source web browser for all supported platforms ahead of the March 4th, 2025, official release date, so it's time to take a look at the new features and changes. Highlights of Firefox 136 include official Linux binary packages for the AArch64 (ARM64) architecture, hardware video decoding for AMD GPUs on Linux systems, a new HTTPS-First behavior for upgrading page loads to HTTPS, and Smartblock Embeds for selectively unblocking certain social media embeds blocked in the ETP Strict and Private Browsing modes.

Firefox 136 is available for download for 32-bit, 64-bit, and AArch64 (ARM64) Linux systems right now from Mozilla's FTP server. As mentioned before, Mozilla plans to officially release Firefox 136 tomorrow, March 4th, 2025, when it will roll out as an OTA (Over-the-Air) update to macOS and Windows users.
Here's a list of the general features available in this release:

- Vertical Tabs Layout
- New Browser Layout Section
- PNG Copy Support
- HTTPS-First Behavior
- Smartblock Embeds
- Solo AI Link
- Expanded Data Collection & Use Settings
- Weather Forecast on New Tab Page
- Address Autofill Expansion

A full list of changes can be found here.
Youtube

YouTube Warns Creators an AI-Generated Video of Its CEO is Being Used For Phishing Scams (theverge.com) 16

An anonymous reader shares a report: YouTube is warning creators about a new phishing scam that attempts to lure victims using an AI-generated video of its CEO Neal Mohan. The fake video has been shared privately with users and claims YouTube is making changes to its monetization policy in an attempt to steal their credentials, according to an announcement on Tuesday.

"YouTube and its employees will never attempt to contact you or share information through a private video," YouTube says. "If a video is shared privately with you claiming to be from YouTube, the video is a phishing scam." In recent weeks, there have been reports floating around Reddit about scams similar to the one described by YouTube.

Opera

Opera Adds an Automated AI Agent To Its Browser (theregister.com) 23

king*jojo shares a report from The Register: The Opera web browser now boasts "agentic AI," meaning users can ask an onboard AI model to perform tasks that require a series of in-browser actions. The AI agent, referred to as the Browser Operator, can, for example, find 12 pairs of men's size 10 Nike socks that you can buy. This is demonstrated in an Opera-made video of the process, running intermittently at 6x time, which shows the user has to type out the request for the undergarments rather than click around some webpages.

The AI, in the given example, works its way through eight steps in its browser chat sidebar, clicking and navigating on your behalf in the web display pane, to arrive at a Walmart checkout page with two six-packs of socks added to the user's shopping cart, ready for payment. [...] Other tasks such as finding specific concert tickets and booking flight tickets from Oslo to Newcastle are also depicted, accelerated at times from 4x to 10x, with the user left to authorize the actual purchase. Browser Operator runs more slowly than shown in the video, though that's actually helpful for a semi-capable assistant. A more casual pace allows the user to intervene at any point and take over.

AI

Judges Are Fed Up With Lawyers Using AI That Hallucinate Court Cases (404media.co) 74

An anonymous reader quotes a report from 404 Media: After a group of attorneys were caught using AI to cite cases that didn't actually exist in court documents last month, another lawyer was told to pay $15,000 for his own AI hallucinations that showed up in several briefs. Attorney Rafael Ramirez, who represented a company called HoosierVac in an ongoing case where the Mid Central Operating Engineers Health and Welfare Fund claims the company is failing to allow the union a full audit of its books and records, filed a brief in October 2024 that cited a case the judge wasn't able to locate. Ramirez "acknowledge[d] that the referenced citation was in error," withdrew the citation, and "apologized to the court and opposing counsel for the confusion," according to Judge Mark Dinsmore, U.S. Magistrate Judge for the Southern District of Indiana. But that wasn't the end of it. An "exhaustive review" of Ramirez's other filings in the case showed that he'd included made-up cases in two other briefs, too. [...]

In January, as part of a separate case against a hoverboard manufacturer and Walmart seeking damages for an allegedly faulty lithium battery, attorneys filed court documents that cited a series of cases that don't exist. In February, U.S. District Judge Kelly demanded they explain why they shouldn't be sanctioned for referencing eight non-existent cases. The attorneys contritely admitted to using AI to generate the cases without catching the errors, and called it a "cautionary tale" for the rest of the legal world. Last week, Judge Rankin issued sanctions on those attorneys, according to new records, including revoking one of the attorneys' pro hac vice admission (a legal term meaning a lawyer can temporarily practice in a jurisdiction where they're not licensed) and removed him from the case, and the three other attorneys on the case were fined between $1,000 and $3,000 each.
The judge in the Ramirez case said that he "does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden." In fact, he noted that he's a vocal advocate for the use of technology in the legal profession.

"Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution," he wrote. "It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution."
Apple

Apple Unveils iPad Air With M3 Chip (apple.com) 42

Apple today announced a significant update to its iPad Air lineup, integrating the M3 chip previously reserved for higher-end devices. The new tablets, available in both 11-inch ($599) and 13-inch ($799) configurations, deliver substantial performance gains: nearly 2x faster than M1-equipped models and 3.5x faster than A14 Bionic versions.

The M3 brings Apple's advanced graphics architecture to the Air for the first time, featuring dynamic caching, hardware-accelerated mesh shading, and ray tracing. The chip includes an 8-core CPU delivering 35% faster multithreaded performance over M1, paired with a 9-core GPU offering 40% faster graphics. The Neural Engine processes AI workloads 60% faster than M1, the company said. Apple also introduced a redesigned Magic Keyboard ($269/$319) with function row and larger trackpad.
Google

Google Releases SpeciesNet, an AI Model Designed To Identify Wildlife (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Google has open sourced an AI model, SpeciesNet, designed to identify animal species by analyzing photos from camera traps. Researchers around the world use camera traps -- digital cameras connected to infrared sensors -- to study wildlife populations. But while these traps can provide valuable insights, they generate massive volumes of data that take days to weeks to sift through. In a bid to help, Google launched Wildlife Insights, an initiative of the company's Google Earth Outreach philanthropy program, around six years ago. Wildlife Insights provides a platform where researchers can share, identify, and analyze wildlife images online, collaborating to speed up camera trap data analysis.

Many of Wildlife Insights' analysis tools are powered by SpeciesNet, which Google claims was trained on over 65 million publicly available images and images from organizations like the Smithsonian Conservation Biology Institute, the Wildlife Conservation Society, the North Carolina Museum of Natural Sciences, and the Zoological Society of London. Google says that SpeciesNet can classify images into one of more than 2,000 labels, covering animal species, taxa like "mammalian" or "Felidae," and non-animal objects (e.g. "vehicle"). SpeciesNet is available on GitHub under an Apache 2.0 license, meaning it can be used commercially largely sans restrictions.

Slashdot Top Deals