Friends of PRISM,
I’m back! After a short hiatus aimed at reducing Elon Musk’s underpopulation concerns, I’ve returned from paternity leave. I’m excited to be back at work and, with it, back to this newsletter.
I checked out while on leave, so it’s been interesting to see how things have changed since I left. Here are a few things I was looking out for:
AI policy - It seems the debate hasn’t changed: AI companies keep talking about existential risk, which is conveniently aligned with their hype-building strategies. The experts keep pointing out this is a distraction. At least one country has fallen hook, line, and sinker.
Section 230 cases - The Supreme Court punted: This outcome was expected and ultimately anticlimactic. But that’s one major uncertainty out of the way for 2023, and it clearly shows that the Court isn’t willing to legislate from the bench on complex tech topics.
The EU’s Meta data transfer decision - This is surprisingly powerful: Meta was fined €1.2B as the GDPR celebrated its fifth birthday. The real effect is that it questions the legal basis for transferring EU user data to the US. This is the type of enforcement and impact that has been possible for years without the need for fancy new rules. We’re only just starting to see the real impact.
Now onto this edition. I’ll go where my mind is focused a lot right now: Child safety.
Good to be back.
THE BIG TAKE
Won’t somebody please think of the adults?
Child safety is a hot topic in tech policy right now. But it’s also a trojan horse for implementing more radical policy across the industry.
A lot is happening in child safety. Concerns around how tech impacts minors are driving policymakers to take on the industry from many different angles: data and privacy, tech addiction and mental health, “sharenting”, and so much more.
144 child safety bills across 43 US states have been introduced so far in 2023. This barely scratches the surface:
In the US, Amazon was fined last week by the FTC over a COPPA violation (adding to several other recent child safety fines by the FTC), various federal bills exist targeting design (Kids Online Safety Act) and sexual exploitation (EARN IT Act), President Biden signed an executive order concerning youth mental health, and the Surgeon General released an advisory on social media’s impacts on minors.
Elsewhere, there’s quite a bit of action as well. The EU is working on new child safety rules. The UK introduced its age-appropriate design code (AADC) rules in 2020 (which inspired other bills, including California’s) and is debating its Online Safety reform. Australia introduced a safety-by-design approach. In some of these discussions, radical trade-offs - like breaking encryption to enable child protection - are on the table and may be implemented.
Multilateral organizations are acting - The ITU developed Child Online Protection guidelines. The UN has several initiatives (like this on cyberbullying). The WHO is monitoring the situation, and the WEF is… talking and stuff.
This is really just a smattering.
But won’t somebody think of the adults?
Many of the reforms we see today would never have a chance to progress if they weren’t framed around protecting children. And protecting children is often the first domino to fall on the way to broader regulation. This is exactly how it has worked in the past - limits on lead in children’s toys spurred broader consumer protection rules, regulations around protecting children from household poisons paved the way for wider safeguards, and even early worker safety laws started with children.
Tech policy looks to be going the same way. Child safety is the perfect trojan horse for broader regulation. Nobody can oppose it, allowing policies to form a beachhead in the industry.
Here are three tangible ways child safety is shaping the future:
Setting precedent. This is being done in tactical and strategic ways:
Tactically: Cases like the FTC’s case against Amazon interpret vague legal language in ways that will bind future decisions. In the Amazon case, a legal debate occurred around what counts as a “reasonable” amount of time to keep data (easy answer in this case as they kept it indefinitely, but you get the point).
Strategically: Key decisions will be made about long-standing debates - in particular the debate over whether it's acceptable to create backdoors to encryption to protect (child) safety. Same with anonymity and privacy debates.
Forcing voluntary action. Social media platforms like Meta and TikTok have voluntarily launched child safety features, including time-spent measures and parental controls. The child safety push is forcing companies to think harder and create features that are “oven-ready” (to use a Boris Johnson term, RIP) and that adult users may then want for themselves.
Testing ground for ideas. A large range of inventive and radical policy ideas will be debated and implemented, creating an evidence base for what works and what doesn’t and inspiring a new set of policy proposals as it evolves.
We can already see the next wave of policy ideas forming.
Actual bans: Proposals to age-gate social media for kids are, in effect, a ban.
Real limits on data: Many proposals say companies can’t keep and use data that comes from children at all.
Encryption and anonymity breaking: As noted above, proposals to protect children require identifying children via age verification – that is to say, identity. That violates privacy and anonymity principles that were once held sacrosanct.
Design rules: Many ideas included in AADC-style bills focus on how platforms are designed. Approaches like privacy-by-design or -by-default create far more fundamental obligations for platforms in how they build products. You can’t just tweak something to comply; you need to build it from the ground up with safety in mind at every step.
Targeted, “Tobacco causes cancer”-style research: Compared to wider tech policies that are opening up platform data to researchers in general, child safety advocates seem out to prove an existing, damning hypothesis (e.g., Senator Hawley’s proposal wants to fund government research on how social media impacts children).
They’ll protect the adults eventually. To date, tech policy outside of child safety has been a discussion of trade-offs - between safety and anonymity, harm reduction and innovation, and so on. In many of these trade-offs, the industry gets its way. Child safety flips the debate; protecting children almost always comes out on top.
Politically, the adults aren’t quite ready to push politicians for the same action. Political leaders still highlight their meetings with tech CEOs, showing the aura of the industry is still positive. The discrepancy shows up clearly in polling: people’s views on tech and child safety are negative, while their views of tech overall remain positive.
But that may well start to change soon. As AI concerns build, the potential for harm from tech is growing in the public eye. The sheen of economic dynamism is wearing off as layoffs bite. And the personal flaws of tech geniuses are becoming more obvious (ahem).
Once the politics shift, child safety action today will matter a lot to the policy environment we see then.
So what does this all mean?
With literally hundreds of proposals and bills, you should be tracking what’s happening in the child safety arena, even if you’re not directly involved.
If you are involved, it’s time to get strategic and consider what an “end-state” after all of this activity might look like.
You can take solace that at least some action is being taken to protect our children. We’re still waiting on the same for the adults.
Top 5 - The eye-catching reads
The Digital India Bill is coming: The long-awaited reforms will be sweeping and of huge importance – not just for India’s domestic tech sector but also for the US companies operating there. Plus, other countries will be watching for ideas to inform their digital policy approaches.
Microsoft is always ahead of the regulators: As I said in a prior edition, Teams is surely one of the next sources of antitrust scrutiny, given how dominant it is (bonus chart of the week!). Microsoft, wily as always, stopped bundling Teams with Office, a sign they think scrutiny will come sooner rather than later.
How China imposes sanctions: We got a hugely insightful report from MERICS concerning China’s approach to sanctions as it expands its unilateral measures.
Of course, this would be the Japanese contribution to AI: Fujitsu developed a model to measure concentration during tasks. A nightmare for micro-managed employees the world over. Jokes aside - Japan made an important mark in AI policy, deciding copyright doesn’t apply to AI training.
The US is done with Web3; China isn’t: The US SEC may finally be fully breaking the crypto industry in the US. But China may be starting to warm to Web3 (despite having banned crypto). Beijing released a Web3 White Paper and is pushing investment in the industry. While quite metaverse-focused, it does note blockchain as a key building block.
Sidenote for an amazing quote from the SEC’s Binance complaint: “As Binance’s CCO bluntly admitted to another Binance compliance officer in December 2018, ‘We are operating as a fking unlicensed securities exchange in the USA bro.’”
My top charts of the week
If you work at a tech company – and especially if you’re David Sacks – you might have a warped sense of the market right now…
First, tech is the only sector doing big layoffs. Second, it’s the only sector where the market is rising.
AI jobs wipeout…or not
AI has reportedly outperformed radiologists for years already, yet radiologists still have jobs. The AI jobs wipeout is a fakeout.
Low interest rates drive real change
A new spin on the term “low interest rate phenomena”