How do we fix the internet?

AUSTIN, Texas — Last year’s upheaval in the tech world turned conventional wisdom on its head, transforming also-rans into celebrities, titans into turkeys, and even the financial system itself upside down.

So, naturally, a cadre of ambitious technologists gathered at SXSW is ready to take a swing at things this year. And not necessarily just to make a quick buck, but to ask bigger questions, like: What if we used this moment as a real catalyst for change?

Some of the more ambitious digital futurists are proposing entirely new ways of thinking about the internet: ways that account for the risks and opportunities of artificial intelligence, the burnout and nastiness plaguing social media users, and the constant reminders that our most personal information is woefully insecure.

That was the theme of a panel yesterday afternoon, “Open Innovation: Breaking the Default Together,” where two senior Mozilla Foundation officials, along with an industrial designer, outlined their vision for an internet free of the economic structures and incentives that have caused much of the past decade’s digital heartburn.

After the talk, I spoke with one of the panelists, Liv Erickson, team lead for Mozilla’s VR-focused Hubs project. We talked about what’s at stake in how the internet is currently designed, and how we can make it better as artificial intelligence and virtual reality reshape it before our eyes (sometimes literally).

The conversation has been condensed and edited for clarity.

When you talk about fixing the internet, you’re quick to focus on data ownership. Why is that so central to these big questions about the architecture of the internet?

If you look at how people interact with each other online, you see a very enthusiastic approach to content creation. People want to share their experiences and talk about what’s important to them. Right now, what we’re seeing is that it’s really hard to build audiences and take ownership of that content. That means if a platform changes its terms of service, or shuts down because it’s no longer profitable for that company, much of that content could simply disappear.

Last week, one of the earliest social VR platforms, AltspaceVR, shut down. People are mourning that experience because they’ve lost not just videos and photos, but entire worlds they’ve built, social connections they’ve made, and versions of themselves. When we think about this next generation of the internet and what it might become, data ownership is a key component, because of the enormous amount of psychological and emotional attachment we have to our online identities.

What are the future threats to Internet users that worry you the most?

Data collection is a big part of it. But it’s also about what applications do in response to that data. That’s one of the reasons generative AI is great, but also scary. When you think about it on a longer-term, more dystopian horizon: What can people do with your information to immediately change the environment you’re in?

Philip Rosedale gave a talk about ethics in XR on Sunday, and he made a really good point that in the physical world we generally know when we’re being advertised to. But here at SXSW, you’re being advertised to all the time, and with immersive worlds and XR technology you won’t necessarily know that it’s advertising.

What happens when I think I’m interacting with a friend or co-worker in VR, and it turns out it’s actually just a bot that develops a relationship with me, happens to always be online when I’m online, starts telling me its political views, and I start wondering whether they should be my political views too? There is a lot still to learn about the manipulation of information in virtual spaces.

What role should US technology policy play in protecting the future of the Internet?

When I look at what’s being discussed around data privacy, many of the words used to describe the types of personal information being collected can be vague. As in, “We’re not actually collecting biometric data on the headset,” which is true in most cases. But that data can be inferred, and inferred data isn’t covered by some data privacy laws.

It’s very important to think about this from a consumer protection perspective. I was a policy liaison with the Aspen Institute a few years ago, and even then I imagined a world where advertisers could scrape the profile pictures of my Facebook friends, create a persona who looked like one of my friends, and start using it in their advertising. I would have no way of knowing that’s what they were doing. It’s not technically my personal information, but it’s meant to play on my emotional experiences. I think that’s an area the FTC could look at.

Has the rise of generative AI made your job seem dramatically more urgent?

I think what we’re able to do with these tools is incredible. I also want people to dig down another level and understand how they can be used for harm, and that’s usually where the conversation stops. I have friends and colleagues who will come up and say, “Look, I made this great art where I collaborated with an artist through generative AI,” and it’s like… is it a collaboration if the other person isn’t in on it?

How do you feel about the idea that blockchain could be a solution to digital ownership problems?

I don’t think technology by itself is ever really the solution.

I’m a big proponent of people owning the value they create, so I think a lot of the principles of distributed systems are really important and powerful. The exciting thing about Web3 spaces is that more people are realizing that it can be a tool for them to take back creative control and ownership of what they do. I also think there are places where it’s being pitched as a solution to problems it’s not really going to solve. Any time you take a technology and say that this technology is going to solve a human problem, that’s when my alarm goes off.

Is there a simple rule that observers or policymakers can apply to determine whether a new technology or platform is designed for users as you describe it?

How does it make money? That forces people to talk about the decisions they’ve made about whether or not to sell data. And is the technology able to speak to a basic, underlying need? What does it solve for people? What does it give them in everyday life?

The most dangerous trap we can create for ourselves is telling ourselves that we have to do things because that’s the way we’ve always done them. This is a pivotal moment, and I want as many people as possible to question it as we learn about these new technologies. The software that creates these virtual worlds allows us to try new things and actually say: “You know what, I liked it better. I liked that version of me better.”

OpenAI released the sequel to its internet-breaking GPT-3 yesterday, and GPT-4 is already changing the way people think about what large language models can do.

In a blog post introducing the release, OpenAI describes its “best-ever results (though far from perfect)” on factuality, steerability, and refusing to go outside of guardrails. (Consider yourselves warned, Waluigi-lovers.) It’s also capable of recognizing and describing images with considerable accuracy, including relatively complex memes that require an understanding of irony that frankly eludes many humans.

GPT-4 doesn’t quite represent the conceptual leap its predecessor made when OpenAI released it. But it does make some predictions about the technology’s power seem far more feasible and easier to conceptualize than before.

For example, Sam Hammond of the Niskanen Center, building on an earlier blog post of predictions, suggested that the “one-click lawsuits” it could enable “will bring an endogenous version of [Charles Murray’s] proposal for a libertarian legal defense fund that allows people en masse to challenge crappy laws and regulations until the system is clogged with legal civil disobedience at scale.”

An unlikely possible winner from last weekend’s bank collapse, which dealt a major blow to crypto: stablecoins.

POLITICO’s Bjarke Smith-Meyer has a report for Pro subscribers describing how Circle’s USDC stablecoin has “fully returned to parity, undermining the central bank’s talking point that the asset class poses an unnecessary risk to the financial system” after Monday’s plunge.

Still, it might not be anything specific to the technology that kept it, well, stable, so much as the fact that USDC’s holdings are subject to the same rules as everyone else’s. “Rather than being a vindication of crypto’s power, the USDC could be a beneficiary of the broader system’s regulated status,” Bjarke writes. “It was only after US regulators stepped in to guarantee SVB’s deposits to prevent further panic that the USDC’s decline eased.”