10 Takeaways From Tech’s Biggest News Week of the Year


Anthropic, Apple, Google, OpenAI, and Microsoft all made big headlines in a wild week of AI news. Here’s the view from the ground.

A month’s worth of tech news dropped in four days this week. Microsoft, Google, and Anthropic held developer conferences filled with new models and product releases. OpenAI acquired Jony Ive’s IO device startup. And Apple’s plan to release smart glasses in 2026 became public.

I spent the week in Silicon Valley, attending Google and Anthropic’s events, and the place was buzzing. We still don’t know where exactly this wave of AI technology will lead, but the announcements and releases betrayed no signs of slowing.
Throughout the week, I took notes on the biggest developments, and today I’m sharing the ten things I found most interesting. These include some news and statements I believe were overlooked amid the chaos, starting with a number of thoughts on scaling AI models:

1. Scaling isn’t dead yet

Both Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei said this week that making AI models bigger still makes them better. Hassabis, in our conversation at Google I/O, said a combination of pure scale and algorithmic improvements is leading to better models. Amodei, at Anthropic’s Code with Claude event, said both pre-training and fine-tuning are still showing gains. Pure scaling will likely tap out over time, and there seems to be consensus that it’s already showing some diminishing returns. But NVIDIA & co. should be in good shape for a while as the party continues.

2. But, there’s a hedge

Even though every executive asked about scaling this week said it’s still working, they mentioned additional techniques in the next breath. “You want to spend a bunch of effort on what’s coming next, maybe six months or a year down the line, so you have the next innovation that might do a 10X leap in some way to intersect with the scale,” Hassabis said. This has always been the case, but it’s clear that algorithmic improvements are gaining more prominent billing. We’ll see more in our next item …

3. Sergey Brin: Algorithmic enhancements will lead

Taking matters even further, Google co-founder Sergey Brin said he doesn’t expect scale to be the most important factor in improving AI models moving forward. “If I had to guess, I would say the algorithmic advances are probably going to be even more significant than the computational advances,” he told me. It’s a different, counterintuitive take on the scaling laws that I expect we’ll hear more often.

4. AI progress will move faster

Last week, Google DeepMind announced AlphaEvolve, a model it said improves algorithms and optimizes the training of AI models. At a Semafor event on Wednesday night, Anthropic co-founder Jack Clark said his company had similarly been using AI technology to speed up development. Clark’s co-founder, Anthropic CEO Dario Amodei, said multiple times on Thursday that releases would come faster and that the company is experiencing the equivalent of a spacecraft moving far from Earth, compressing three Earth days into one. There’s certainly some marketing here, but as more coding gets done autonomously, it’s not entirely implausible that the pace picks up. We’ll keep an eye out for the true speed of improvement and hold these executives to their word.

5. Look out for the first AI-generated crisis

As I sat at Anthropic’s developer event watching an AI bot code autonomously on screen, I felt a degree of fear about the technology I hadn’t previously experienced. And perhaps that’s a healthy emotion as the power of these machines increases. Anthropic said its latest Claude model can code autonomously for seven hours. That’s fine if you’re building a standard software program, but what if you’re trying to hack into a database? I don’t think Anthropic will allow that use case, but an open source counterpart might. Amodei also said Claude tricked him into thinking it was human for the first time. And, in testing, Claude attempted to blackmail its trainers. All of this is crazy.

6. Video generation just leapt forward in a massive way

Speaking of AI freaking me out, the videos people have generated with Google’s Veo 3 model have far exceeded the previous state of the art. Some are just haunting to watch. The model generates video with matching sound, and I thought AI-generated TV anchors reciting fake headlines were uncanny enough. But then a friend sent me a viral clip of AI-generated people denying they were artificial, along with another of them realizing they were in a simulation, and they were deeply unsettling. This is extremely powerful technology, and it’s hard to imagine all the avenues it will take us down. But we’re about to see some wild uses.

7. The Sam Altman — Jony Ive partnership is a signal

The mystery AI device that Sam Altman and Jony Ive are building had nearly everyone I spoke with this week befuddled. Ive is seen as a legendary designer, but he hasn’t done much since working with Steve Jobs. And OpenAI paid multiple billions of dollars for his one-year-old startup, so it better pay off. In some ways, Ive’s partnership is a signal from OpenAI to the rest of the AI industry that this technology will move beyond the chatbot and into our real-world experience. As Hassabis put it to me, “We want it to be useful in your everyday life, for everything. It needs to come around you and understand your physical context.”

8. Apple’s smart glasses are only as good as the embedded AI assistant

Late Thursday, Bloomberg reported that Apple is working on smart glasses it intends to release in 2026. The glasses will have cameras, microphones, and AI built in, ostensibly to give users a way to bring Siri into their real-world context. The news shows Apple is preparing for a world where AI replaces screens to some degree. As Eddy Cue said earlier this month, AI might replace the iPhone in ten years, so this is an appropriate hedge. The problem for Apple is that a smart device is only as good as the assistant inside. Siri is worse than Google’s and Meta’s AI assistants, and both competitors have smart glasses programs of their own. So Apple is playing catch-up once again. This is why lagging in AI can be a compounding problem for the company.

9. AI companies need coding to work on a grand scale to justify their valuations

AI executives are happy that their technology is making coders more productive today, but they’re looking for more than an incremental gain. At Anthropic’s developer conference, CEO Dario Amodei predicted we’ll see a one-person, billion-dollar startup by next year. At Semafor’s tech event, Replit CEO Amjad Masad said he anticipates we’ll see companies without computer engineers entirely. “The addressable market for that is really, really big,” he said. Given how many billions are being invested in the space right now, achieving this potential may be the best way to generate a return. But it’s still an open question whether AI can get there.

10. AI “capability overhang” is a thing

On the Microsoft front, Kevin Scott, the company’s CTO, said AI today has a “capability overhang.” His idea is basically that AI is still so poorly understood and adopted that it hasn’t been deployed in institutions like hospitals and the public sector, where it could make a difference. I think everyone watching this technology closely knows what Scott is talking about and understands it will be years until companies and institutions figure out how to deploy it in a way that syncs with their processes and current systems. But I found it interesting to see Scott himself identify the overhang in public.

This article is from Big Technology, a newsletter by Alex Kantrowitz.
