Discover how EU rules are delaying Apple's new features. Learn about the TikTok deal, the UK's AI fraud tool that recovered nearly £500m, and the $3 trillion AI data center boom. Find out about AI's promise, its bias risks, and new support for online creators.
2025-09-24
Remember when 2024 felt a little bit like the wild west of tech? Things are certainly getting real now. This year, we're seeing big policy battles, platform reforms, and a huge push for **AI**, all shaping the next wave of tech. It's quite a ride, honestly.
So, let's grab a coffee. We can chat about what's really happening.
The European Union's Digital Markets Act, or DMA, is causing some friction, like a pebble in your shoe. Apple, for example, says these "unfair" rules are delaying new features for its European users. Can you imagine waiting for a cool new app function, then finding out it's stuck because of, well, paperwork? That's pretty much what's happening.
Greg Joswiak, an Apple executive, says regulators in Brussels are challenging Apple's closed system, which he feels takes away the "magical, innovative experience" that makes Apple special. The company argues its system keeps things safe and high quality, like a strong lock on a door.
But EU regulators have a different view. They say Apple unfairly pushes out rivals, and they hit the company with a €500m fine back in April. Ouch. Under the DMA, headphones from other brands should work with iPhones, and Apple also needs to let other platforms send and accept content with AirDrop. This is about giving consumers more choice, explained Sébastien Pant of BEUC, a consumer advocacy group.
Still, Apple is pushing back. Its new AirPods Pro 3, with a "Live Translation" feature, are out in the US but not in Europe. Apple says the tech needs AirPods and iPhones working together, and opening it up to other devices would take more work. It wants to keep things private and secure.
Other companies, like Meta, have also held back features in the EU, on Instagram and WhatsApp, because of rules about collecting user data. It makes you wonder: will these rules slow down new ideas in Europe?
Across the pond, President Donald Trump says a deal for TikTok's US operations is done. For a while, it looked like TikTok might even get banned in the US. This was because of worries about its Chinese parent company, ByteDance. There were concerns about possible links to the Chinese government. Officials worried Beijing could get data on 170 million US users. Both TikTok and ByteDance say these claims are not true.
The plan now is to copy TikTok's recommendation algorithm and retrain it using US user data. Oracle, a US tech giant, will check the new system, and a joint venture with US investors will run it. This aims to meet the requirements of a 2024 law that said TikTok had to be sold or banned.
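To make "retrain it on US user data" a bit more concrete, here's a tiny, purely illustrative sketch, not TikTok's or Oracle's actual system. It assumes a made-up interaction schema and a simple logistic-regression engagement model, just to show what fitting a fresh model on a US-only slice of data could look like in principle.

```python
# Illustrative only: a toy "retrain the recommender on US data" pipeline.
# The data schema and the model choice are assumptions, not TikTok's real stack.
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class Interaction:
    country: str           # where the user is based
    watch_time_s: float    # seconds the user watched
    video_length_s: float  # length of the clip
    liked: int             # 1 if the user liked the clip, else 0

def retrain_on_us_data(interactions: list[Interaction]) -> LogisticRegression:
    """Keep only US interactions, then fit a fresh engagement model on them."""
    us = [i for i in interactions if i.country == "US"]
    X = [[i.watch_time_s, i.video_length_s] for i in us]
    y = [i.liked for i in us]
    model = LogisticRegression()
    model.fit(X, y)  # the "retrained" model has only seen US behaviour
    return model

# Example usage with made-up interactions:
sample = [
    Interaction("US", 42.0, 60.0, 1),
    Interaction("US", 3.5, 45.0, 0),
    Interaction("DE", 55.0, 60.0, 1),  # filtered out: not a US interaction
    Interaction("US", 30.0, 30.0, 1),
    Interaction("US", 2.0, 50.0, 0),
]
us_model = retrain_on_us_data(sample)
print(us_model.predict_proba([[20.0, 40.0]]))  # predicted engagement probability
```

The real system would be vastly more complex, of course, but the core idea Kelsey Chickering flags is visible even here: a model trained only on one region's behaviour can end up ranking things differently.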
But some experts are not so sure this will work out perfectly. Kelsey Chickering, an analyst at Forrester, points out a concern. She says the algorithm might "feel different" to users after being retrained. That could be a real sticking point.
The White House calls this a "win" for US users. China has been quieter, saying it respects the company's wishes for commercial negotiations. The whole thing is a delicate dance, and it shows how closely governments are now looking at foreign-owned tech platforms.
Governments are also using tech to fight fraud. The UK government, for example, says a new AI tool helped get back almost £500m. That money was lost to fraud in the last year. This is a big deal! More than a third of that money came from Covid-19 pandemic fraud.
Ministers say the tool, called the Fraud Risk Assessment Accelerator, will now be licensed to other countries, including the US and Australia. Researchers in the Cabinet Office created it; it "scans new policies and procedures for weaknesses" before they can be exploited, with the aim of making policies "fraud-proof." That's a clever way to catch problems before they happen, isn't it?
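The government hasn't published how the Accelerator actually works, so treat the following as a rough illustration of the general idea only: scan a draft policy description against a checklist of fraud-prone features before it launches. Every rule and trigger phrase below is invented for the example.

```python
# Purely illustrative rule checklist, NOT the real Fraud Risk Assessment Accelerator.
RISK_RULES = {
    "self-certif": "Relies on self-certification with no evidence checks",
    "upfront payment": "Pays out before eligibility is verified",
    "no identity": "No identity verification step",
    "manual review only": "No automated cross-checks against existing records",
}

def scan_policy(policy_text: str) -> list[str]:
    """Return the warnings whose trigger phrases appear in the policy text."""
    text = policy_text.lower()
    return [warning for trigger, warning in RISK_RULES.items() if trigger in text]

draft = (
    "Applicants self-certify their turnover and receive an upfront payment "
    "within 48 hours; claims get a manual review only after funds are issued."
)
for warning in scan_policy(draft):
    print("Potential weakness:", warning)
```

A production tool would presumably go far beyond keyword matching, but the principle is the same: flag the weak points while the policy is still a draft, not after the money is gone.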
The savings from this effort will even go towards hiring more nurses. It will help hire teachers and police officers. That sounds like a smart use of AI.
TikTok is also taking steps to "strengthen our platform for Canadians." Canadian privacy officials found that TikTok wasn't doing enough to stop children under 13 from using the app, and that it collected sensitive data from many children and used it for targeted ads. TikTok says it will add new measures, though it disagrees with some of the findings. This shows that companies are also beefing up their defenses and their moderation.
So, what about the actual building blocks of this AI revolution? Well, they're expensive. We're talking about huge investments in data centers: Morgan Stanley estimates about $3 trillion will be spent on AI data centers worldwide by 2029, roughly half on building the centers and half on pricey hardware.
These are not your grandma's server rooms. AI models need massive computing power from expensive Nvidia chips, which come in cabinets costing around $4m each. The cabinets are packed close together because every tiny bit of extra distance adds delay, and that makes a big difference for AI. This "density" is key to making all the computers work as one huge brain.
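To see why "every tiny bit of distance adds delay," here's a back-of-the-envelope calculation. Signals in fiber travel at roughly two-thirds the speed of light, which works out to about 5 nanoseconds of one-way delay per meter of cable; the cable lengths below are just illustrative.

```python
# Back-of-the-envelope: how much latency does physical distance add?
SPEED_OF_LIGHT_M_PER_S = 3.0e8
FIBER_FACTOR = 2 / 3                       # signals in fiber travel at ~2/3 c

def one_way_delay_ns(distance_m: float) -> float:
    """One-way propagation delay over a fiber run, in nanoseconds."""
    return distance_m / (SPEED_OF_LIGHT_M_PER_S * FIBER_FACTOR) * 1e9

for distance in (1, 10, 100):              # illustrative cable runs, in meters
    print(f"{distance:>4} m  ->  ~{one_way_delay_ns(distance):.0f} ns one way")
# ~5 ns per meter: trivial for loading a web page, but it adds up when thousands
# of chips exchange data millions of times while training a single model.
```

That's why designers squeeze the cabinets together instead of spreading them out.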
Daniel Bizo of The Uptime Institute, a data center consultancy, says AI workloads place "unheard of" demands on the power grid. It's like "thousands of homes switching kettles on and off in unison every few seconds." Talk about a power surge!
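To put that kettle analogy into rough numbers: a typical electric kettle draws about 3 kW, while a dense modern AI rack can draw on the order of 100 kW. The figures below are ballpark assumptions, just to show the scale of the swings Bizo is describing.

```python
# Ballpark comparison of the power swings an AI cluster can cause.
# All figures are rough assumptions for illustration.
KETTLE_KW = 3              # typical electric kettle
AI_RACK_KW = 100           # order of magnitude for a dense AI rack
RACKS_IN_CLUSTER = 1_000   # a sizeable AI training cluster

cluster_mw = AI_RACK_KW * RACKS_IN_CLUSTER / 1_000
kettle_equivalent = AI_RACK_KW * RACKS_IN_CLUSTER / KETTLE_KW

print(f"Cluster draw: ~{cluster_mw:.0f} MW")
print(f"Equivalent to ~{kettle_equivalent:,.0f} kettles switching on at once")
# When training jobs start and stop in sync, that whole load can swing up and
# down within seconds - the grid-level "kettle" effect Bizo describes.
```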
Companies like Nvidia, Microsoft, and Google are pouring billions into energy projects. They want to keep these centers running. Some even look at nuclear power. But these data centers also use a lot of water for cooling. This raises environmental concerns.
It's a race to build the foundation for AI. There are big questions. Can this spending last forever? But as Zahl Limbuwala, a data center specialist, puts it: AI will "have more impact than previous technologies, including the internet." So, these "bragawatts" of power might be truly needed.
Beyond the infrastructure, AI is also changing industries. Consider healthcare, for example. Experts are talking about the pros and cons of using AI in this field.
AI can help with fraud detection, as we saw with the UK government's tool. Now imagine AI helping doctors diagnose diseases faster or find new treatments. That could be a real game changer, with the potential to reshape how we think about health and medicine.
However, there are also ethical concerns. An AI tool used to fight welfare fraud last year showed bias based on age, disability, marital status, and nationality. That's a red flag: we need to make sure AI is used fairly and doesn't create new problems. Amnesty International even criticized the UK government over its "unchecked use of tech and AI systems."
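If you're wondering what "checking an AI tool for bias" can look like in practice, here's one common, very simplified approach (not the method used in the government's own review): compare how often the tool flags people in different demographic groups. The groups and numbers below are entirely made up.

```python
# Simplified fairness check: compare flag rates across demographic groups.
# The case data here is invented purely to illustrate the idea.
from collections import defaultdict

cases = [
    # (age_band, flagged_by_model)
    ("under_35", True), ("under_35", False), ("under_35", False),
    ("35_to_60", False), ("35_to_60", False), ("35_to_60", True),
    ("over_60", True), ("over_60", True), ("over_60", True), ("over_60", False),
]

counts = defaultdict(lambda: [0, 0])          # group -> [flagged, total]
for group, flagged in cases:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

rates = {group: flagged / total for group, (flagged, total) in counts.items()}
for group, rate in rates.items():
    print(f"{group:<10} flag rate: {rate:.0%}")

# A large gap between groups (here, over-60s are flagged far more often) is the
# kind of signal auditors treat as potential bias worth investigating.
print("max/min ratio:", round(max(rates.values()) / min(rates.values()), 1))
```

A real audit would dig much deeper than raw flag rates, but even this simple check can surface the kind of disparities the welfare-fraud review reported.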
It's clear AI offers huge possibilities. But we need to tread carefully, like walking through a minefield.
Finally, let's talk about the people who make content online. YouTube creators, influencers – they are a big part of the economy. A report found that YouTube creators added £2.2bn to the UK economy in 2024. They also supported 45,000 jobs.
Now, a new group of MPs is stepping up. They want to represent these creators and influencers. Feryal Clark, a Labour MP, called them "trailblazers of a new creative revolution." She says they've been "undervalued in Westminster for too long."
Creators like Lilly Sabri, who posts fitness videos on YouTube, welcome the news. She says people often questioned whether being a content creator was a "real job." The new group aims to tear down barriers, champion creators, and make Britain a global leader in creativity and innovation. This is a positive step, and it shows that policymakers are starting to understand the value of online talent.
So, what's the big picture here? The tech market today feels like a juggling act. On one hand, we have this incredible drive towards higher-end **AI**. Think about those massive data centers! On the other hand, there's a tightening grip from regulators. And a big push for security.
This tension is both exciting and a bit scary. It fuels growth, but also makes companies more cautious. Leaders, creators, and investors need to find a balanced path. This means building AI responsibly. It means following the rules. And keeping people safe. It’s about being smart and thinking ahead.
What do you think about all these changes? Are you excited for the future of AI, or a bit worried about the regulations? Let me know in the comments below!
This article is part of our Regulation & Policy section. Check it out for more similar content!