Anthropic shipped 74 releases in 52 days. Google launched a live AI you can literally show your screen to. And OpenAI quietly killed its own video generator. Here’s what it all means for your work.
Claude can now take over your mouse and keyboard. If you’re on a paid Anthropic plan ($20/month), you can give it access to your computer and walk away.
Google’s Gemini 3.1 Flash Live lets you have a real-time conversation with AI while it watches your screen. It’s genuinely useful for getting guided help on software.
OpenAI is shutting down Sora, its video generator. It also killed a Disney partnership and dropped plans for an adult content mode, all in the same week.
A leaked Anthropic blog post revealed a new model tier called “Claude Mythos” that’s more powerful than Opus. It’s expensive to run and Anthropic itself is flagging cybersecurity risks.
It was one of those weeks in AI where you blink and miss something significant. Anthropic alone shipped so many updates that an independent tracking site, The Product Compass, put together a calendar just to document it. [1] Their count: 74 releases in 52 days. At least one new thing, every single day.
But quantity isn’t the story here. The story is that some of these releases are genuinely changing what AI can do for you on a day-to-day basis. Claude can now use your mouse. Google’s AI can see your screen. OpenAI is pulling out of video entirely. And an accidental leak has given us the clearest look yet at what’s coming next from Anthropic.
Here’s everything that matters from this week, explained in plain English with no hype and no fluff.
This is the one to pay attention to. On March 23rd, Anthropic rolled out computer use inside Claude’s Co-work and Claude Code products. [2] What that means in practice: you give Claude permission to control your mouse and keyboard, and it physically clicks around your screen on your behalf.
You need to be on a paid Anthropic plan (the $20/month plan qualifies) and you need to enable it. Go to your Claude desktop app settings, click General under the Desktop App section, and switch on Computer Use. Once it’s on, you can type something like “open DaVinci Resolve and show me where the Magic Mask feature is” and Claude will actually do it hands-free.
Heads up
It’s slow. Noticeably slow. Tasks that take you 10 seconds might take Claude 5 minutes. Right now, this feature is more impressive than it is practical for everyday use. But combine it with Anthropic’s dispatch feature (which lets you trigger Claude from your phone while you’re away from your desk) and the value shifts. You don’t care how long it takes if you’re not there waiting for it.
The key phrase there is “while you’re away from your desk.” The use case that makes sense today isn’t replacing what you do at your computer in real time. It’s setting things running while you’re in a meeting, on a commute, or stepping away. That’s a genuinely different kind of productivity from an AI you chat with.
Try this on Monday
If you’re on a paid Claude plan, go to Settings in the desktop app and turn on Computer Use. Then ask Claude to open a specific app and navigate to a feature you rarely use. Even if it’s slow, it’s worth seeing what “AI using your computer” actually looks like in practice. It’ll change how you think about what’s coming.
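For the technically curious, the same capability is exposed to developers through Anthropic’s computer-use beta in the API. The sketch below only assembles the request payload (no API call is made), so you can see the shape of what gets sent. The model ID and the tool version string follow the public beta at the time of writing and may differ by the time you read this; treat the exact strings as assumptions.

```python
# Sketch: the payload for an Anthropic API request that grants the model
# a virtual screen via the computer-use beta tool. No network call here;
# this just shows the request shape. Exact model ID and tool version
# string are assumptions based on the public beta.

def build_computer_use_request(task: str, width: int = 1920, height: int = 1080) -> dict:
    """Assemble a messages-API payload with the computer-use tool attached."""
    return {
        "model": "claude-sonnet-4-5",          # illustrative model ID
        "max_tokens": 1024,
        "tools": [
            {
                "type": "computer_20250124",   # beta tool version (assumption)
                "name": "computer",
                "display_width_px": width,     # the virtual screen Claude "sees"
                "display_height_px": height,
            }
        ],
        "messages": [{"role": "user", "content": task}],
    }

request = build_computer_use_request("Open DaVinci Resolve and find the Magic Mask tool")
print(request["tools"][0]["name"])  # → computer
```

In a real session you would pass this payload to the SDK, then loop: the model returns click and keystroke actions, your code executes them and sends back screenshots, and the cycle repeats until the task is done. That loop is what makes it slow.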
Two other Anthropic updates worth knowing about this week.
First, Claude Co-work now has a Projects feature. You can create a named project (say, “Client X onboarding” or “Q2 marketing planning”), give it custom instructions so Claude always knows how to behave inside that context, and attach relevant files. It works exactly like Projects inside the main Claude app or ChatGPT, but now it’s inside the Co-work environment. This matters because it means you can have persistent, focused AI assistants for different parts of your work rather than starting every session from scratch. [2]
Second, Claude Code got auto mode. If you’ve ever given Claude Code a task, walked away, and come back to find it’s been asking for permission to do something for the past 20 minutes without actually doing it, this update is for you. Auto mode removes the constant permission prompts for lower-risk actions, like running common terminal commands or doing a quick web search. It just gets on with the job. [2]
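If you want finer control than a blanket auto mode, Claude Code also supports a per-project permissions allow-list in its settings file. A minimal sketch, assuming the documented `.claude/settings.json` format (rule syntax may vary between versions):

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(npm run lint)",
      "WebSearch"
    ]
  }
}
```

Commands matching these rules run without a prompt; everything else still asks. That’s a middle ground between approving every action and trusting auto mode entirely.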
Why this matters for non-coders
The Projects feature is the one you can use right now, even if you never write a line of code. Set up a project with context about your team, your clients, or a specific goal and Claude will hold that context every time you open it. No more re-explaining who you are and what you’re trying to do.
Google had a significant week too. They released Gemini 3.1 Flash Live, a version of their Gemini model designed for real-time conversation. [3] But “conversation” undersells what it can actually do.
You can talk to it. You can point your webcam at it and it describes what it sees. You can share your screen and it watches what you’re doing and answers questions about it. Ask it “how do I do X in this software?” while your screen is shared and it tells you, step by step, pointing at the actual elements on your screen.
Gemini 3.1 Flash Live is available in Google AI Studio, the Gemini API, enterprise accounts, Gemini Live on mobile, and it’s rolling into Google’s AI mode in search. You can try it at studio.google.com right now.
Practical use case
The next time you’re stuck on a piece of software you don’t know well, try opening Gemini 3.1 Flash Live, sharing your screen, and asking it to walk you through what you’re trying to do. Think of it like having a patient technical colleague looking over your shoulder. It’s one of those Google AI features that is genuinely underrated right now.
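Under the hood, a live session like this is a streaming connection you configure up front: which model, what it listens to, and how it answers. The sketch below uses plain dicts standing in for the SDK’s config objects so the shapes are easy to see. The model ID comes from this article, and the field names here are assumptions modeled loosely on Google’s published Live API, not its exact schema.

```python
# Sketch of a real-time session config for a Live-style API. Field names
# ("response_modalities", "input_streams") are illustrative assumptions,
# not Google's exact schema; the model ID is the one named in the article.

def make_live_session_config(model: str, share_screen: bool) -> dict:
    """Build the connection config for a real-time voice + vision session."""
    config = {
        "model": model,
        "response_modalities": ["AUDIO"],         # spoken answers back
        "input_streams": ["microphone"],          # what the client sends up
    }
    if share_screen:
        config["input_streams"].append("screen")  # stream screen frames too
    return config

cfg = make_live_session_config("gemini-3.1-flash-live", share_screen=True)
print(cfg["input_streams"])  # → ['microphone', 'screen']
```

The point of the sketch: screen sharing isn’t a separate product, it’s just one more input stream on the same live connection as your voice, which is why the model can answer questions about what you’re doing as you do it.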
Google also showed off an experimental AI-generated browser where typing anything produces a brand new page in real time, generated on the fly by Gemini 3.1 Flash. No page exists until you ask for it. It’s a proof of concept for now (no memory, nothing gets saved), but it gives you a sense of where Google is heading with AI-native web experiences. [3]
And if you’ve been holding off on switching to Gemini because you have years of memory and history in ChatGPT or Claude, Google just made it a lot easier. You can now migrate your memories, preferences, and chat history from other AI platforms directly into Gemini. They’re doing exactly what Anthropic did a few months ago to capitalise on users who wanted to leave OpenAI. [3]
This was probably the biggest single news story of the week, even though OpenAI announced it relatively quietly. Sora, their AI video generator, is being shut down. Not just the app. The API too. OpenAI is getting out of video generation entirely, at least for now. [4]
The stated reason: they’re going to focus on what they do best, their chat and coding models, and stop splitting resources across side projects that aren’t central to that. Sora ate up a lot of compute. Most people who use OpenAI are there to write, research, and code, not to generate video clips. On that logic, cutting Sora makes sense.
But here’s where it got messy. OpenAI had recently signed a partnership with Disney that would give Disney access to OpenAI technology, including the ability to generate content involving Disney’s IP. The moment Sora’s shutdown was announced, Disney walked away from the deal. According to reports, Disney and OpenAI teams were still actively working together as recently as the Monday before the announcement. [4] Disney found out the same way everyone else did.
The pattern to watch
OpenAI also killed its planned “adult mode” feature in the same week. The two things OpenAI is quietly dropping, a flashy video product and a risqué content feature, happen to be the two areas where their main competitors (Runway, Kling, and xAI’s Grok) are doubling down. Whether that’s smart focus or a competitive retreat, only time will tell.
For most professionals, the practical impact of Sora’s shutdown is minimal. If you were using Sora for video generation, you’ll need to find an alternative. Runway, Kling, and Pika are all still operating. But the decision signals something about where OpenAI is placing its bets: on becoming the world’s best thinking and coding partner, not a general creative suite.
Here’s the one that has the AI community talking. Anthropic accidentally left a pre-publication blog post visible on their website this week. [5] Someone found it, copied it, and posted the key details before Anthropic took it down. The post has since been removed, but the details are out.
The new model tier is called Claude Mythos. Based on the leaked text, Mythos sits above the existing Opus models in terms of capability and scores dramatically higher on tests for software coding, academic reasoning, and cybersecurity. [5]
There are two things in that leak worth paying attention to.
First, the cost warning. Anthropic described Mythos as “very expensive to serve” and said it will be “very expensive for customers to use.” This is not a model that’s going to be available in your standard $20/month subscription. It’s positioned as a high-capability, high-cost option for organisations with serious use cases.
Second, the safety flag. The leaked text includes a statement from Anthropic that the upcoming wave of models like Mythos “can exploit vulnerabilities in ways that far outpace the efforts of defenders.” [5] They say they want to release it with extra caution and share results with cybersecurity professionals to help them prepare. That’s a notable thing to say publicly, even in a document that wasn’t meant to be public yet.
What to make of this
Take the specifics with a grain of salt since this was a leaked draft. But the direction is clear: more powerful models are coming, they’re going to cost more, and the companies building them are starting to be more open about the risks. That’s worth knowing, whether or not you ever use a model at the Mythos tier.
A lot happened this week beyond the big stories. Here’s the short version of everything else worth knowing.
Suno’s latest version lets you record your voice and have it generate songs using your actual voice. Even a speaking voice with no singing ability. The results are surprisingly decent. It’s available to Pro and Premier subscribers inside the Suno app under the Advanced settings. [6]
Google’s AI music generator, Lyria, moved from 30-second clips to full tracks up to 3 minutes long. You can now prompt for specific song structures: intros, verses, choruses, bridges. It’s rolling out across Gemini, Google AI Studio, Vertex, and Google Vids. [3]
OpenAI launched richer product discovery inside ChatGPT. When you search for products, you’ll see visual, comparable listings. E-commerce businesses can now submit their product feeds to get listed. It’s free for now, but the groundwork for paid product placements is clearly being laid. [4]
Wikipedia officially banned AI-generated content for creating full articles. Editors can still use AI for basic copy editing and translation. The reasoning: if AI models train on Wikipedia, and AI then writes Wikipedia, you get model collapse over time. It’s a sensible line to draw. [7]
A US federal judge halted the Trump administration’s designation of Anthropic as a “supply chain risk,” citing free speech violations. It’s a partial win: Anthropic still faces a second, separate legal challenge under a different statute that remains in effect. [8]
Early advertisers on ChatGPT have reported they can’t measure whether their ads are generating any business outcomes. OpenAI needs advertising to work to subsidise free-tier compute costs. Right now, the model is unproven and advertisers are publicly saying so. [4]
Three things are shifting at once and they’re worth naming clearly.
AI is moving from “assistant you talk to” to “agent that acts.” Computer use, auto mode, dispatch from your phone: these aren’t features that help you write better emails. They’re features that let AI do things without you. That’s a qualitatively different relationship with the technology, and it’s worth thinking about where it fits in your work before it arrives, whether you’re ready or not.
The big players are making very different bets. Anthropic is shipping relentlessly and quietly, adding capabilities across the board. Google is going deep on live, multimodal, real-time experiences. OpenAI is cutting to focus on what it’s best at. Three different strategies from three dominant players, which means the best tool for your specific job is increasingly going to depend on what job that is.
And the models are getting more powerful faster than most people outside the industry realise. The Claude Mythos leak isn’t just an interesting story. It’s a reminder that what exists today is not the ceiling. Not even close.
Does Claude’s computer use feature work on any plan?
You need to be on a paid Anthropic plan. The $20/month Claude Pro plan qualifies. You also need to enable it manually: go to Settings in the Claude desktop app, click General under Desktop App, and toggle Computer Use on. It won’t activate automatically.
Can I still use Sora if I was using it for video generation?
Sora is being shut down, including the app and the API. If you were relying on it, you’ll need to switch to another video generation tool. Current alternatives include Runway, Kling, Pika, and Luma. Each has a free tier to try before committing.
What is Claude Mythos and when is it being released?
Claude Mythos is a new tier of model above the current Opus models, revealed through a leaked Anthropic blog post. No official release date has been announced. The leaked post suggests it will be significantly more expensive than existing models and Anthropic wants to release it carefully given its cybersecurity implications.
How do I try Gemini 3.1 Flash Live with screen sharing?
Go to studio.google.com and switch the model to Gemini 3.1 Flash Live. Once selected, you’ll see options to enable your webcam or share your screen. Click the screen share option, then start asking questions about what’s on your screen. It works best for walkthrough-style questions about software you’re currently using.
Should my business care about OpenAI adding shopping to ChatGPT?
If you sell products, yes. OpenAI is currently letting e-commerce businesses submit product feeds for free to appear in ChatGPT’s product discovery results. Getting in early, before it becomes a paid placement system like Google Shopping, is worth looking into. Search for “OpenAI product feed submission” to find the current process.
Sources