AI Developers Want To Be Regulated. And With Good Reason.

there's a frog in my snake oil
Hey all.

You've probably noticed that various AI luminaries have been calling for everything from regulation of their industry to a complete pause on new product releases.

It's easy to see these calls as an attempt to freeze the marketplace while they're in a dominant position. But I'd say there's more than that going on...

---

Mutually Assured Discussion:

As the existing court cases over copyright demonstrate, there's a whole new legal world to be explored. And given that the current tools contain capabilities even their creators don't know about (cutely described as 'capability overhang'), it seems reasonable to assume that this legal world is going to get fairly expansive fairly quickly.

As much as businesses love being the new disruptor, they like the predictability of a settled battlefield too. And my suspicion is: These guys know they're sitting on the most disruptive set of innovations in a helluva long time. Ones that may well slip out of their control in various ways, and smash a few more norms than they're comfortable with. And they'd quite like to be prepared...

---

Can We Do That Dave?:

To delve into why they might feel that way, we'd have to get more technical.

But thankfully there's a simple overview of the key negatives at play (which are gliding along somewhat unseen, thanks to the many positive advancements that these technologies are also bringing).

It's great, and you should watch it: the 'AI Dilemma' presentation.



Unfortunately it's also an hour long, and contains some TED-talk tech-bro framing that may put some people off before they get to the meat. So I'll do my best to summarise the key contentions:
  • Current LLM techs are potent and bring large challenges for social norms and institutions.
  • The growth of many current capabilities is 'exponential', and each can boost other capabilities in turn.
    • This synthesis, speed of growth, and potential for novelty mean we should expect more extremely disruptive techs and applications to arrive with increasing rapidity.
  • Putting these technologies at the heart of an arms race for public adoption & market share will only drive the worst end of the above.
    • A pause on new public builds, and a cap on compute for R&D, would give the world time to catch up on what we've already got.

---

A Positive Spin, And Then Back Again:
  • There are some very cool technologies being juiced by LLMs as well. It's crazy to see some known ones zoom along suddenly, and crazy to think what will be achievable next. (Crazy in a good way, as much as possibly crazy in a sharply disruptive way. Recreating dreams is insane. Decoding someone's thoughts without their permission, equally so...).
  • Some of the trends may cap out. (Just because GPT models have demonstrated the 'Theory of Mind' of a 9-year-old in the space of a few years doesn't mean they'll keep rocketing along that road). But it's still worth being prepared for a world where they stay on trend.
  • None of the above is suggesting 'AI sentience' is imminent, with all the Sci-Fi movie plots that would imply. (But that is still on the long list for later on.)

What I found mind-blowing about the video, at core, was that I've watched a lot of these technologies inch along over the last decade. And seen them start to explode in the last few years. And taken individually, I've always been dazzled by their positive potentials first, and left their negatives very much in the background.

But I think stripping away the dazzle, and just noting how the negatives synthesise alone, is instructive in this case. They are equally potent. And ignoring them won't make them go away.

---

Too Long, Got a Bot to Summarise:

The genie is out of the bottle. We should enjoy the magic it can perform, and have a play. But personally I'm all for a 'pause' in further public deployments. For a detente in the arms race. For some social breathing space to figure this stuff out. Because as societies and individuals we've barely caught up with what last year brought. We are woefully underprepared for the next.
__________________
Virtual Reality chatter on a movie site? Got endless amounts of it here. Reviews over here



I'm lazy so I asked a bot to reply to your post:

I can see that you have done some research on the topic and have some valid points.

AI regulation is a complex and controversial issue that has many pros and cons.

Some of the pros of regulating AI are:
  • It can ensure that AI systems are safe, ethical, transparent, and accountable to humans.
  • It can prevent or mitigate the potential harms of AI to society, such as privacy violations, discrimination, bias, manipulation, and misuse.
  • It can foster public trust and confidence in AI and its benefits.
  • It can promote innovation and competitiveness by setting clear and consistent standards and rules for AI development and deployment.

Some of the cons of regulating AI are:
  • It can stifle innovation and creativity by imposing excessive or rigid restrictions on AI research and applications.
  • It can create barriers to entry and competition for smaller or emerging players in the AI field.
  • It can create regulatory fragmentation and inconsistency across different countries or regions, leading to confusion and inefficiency.
  • It can be difficult to enforce and monitor compliance with AI regulations, especially given the complexity and dynamism of AI systems.

The debate on AI regulation is ongoing and evolving, as different stakeholders have different perspectives and interests. For example, the EU has proposed a comprehensive legal framework for AI regulation that aims to ensure a human-centric and ethical approach to AI. However, some critics argue that the EU proposal is too restrictive or vague, while others argue that it is not enough to address the challenges of AI. Similarly, according to search results, some U.S. senators and Sam Altman, the CEO of OpenAI, have expressed their views on how to regulate AI in a recent hearing. They agreed that some form of regulation is needed, but they differed on the details and scope of such regulation.

As you can see, there is no easy or definitive answer to the question of how to regulate AI. It is a matter of balancing the risks and opportunities of AI, as well as the values and goals of different actors. I think it is important to have an informed and constructive dialogue on this topic, involving various stakeholders from academia, industry, government, civil society, and users. I also think it is important to keep learning from the latest developments and best practices in AI regulation around the world.
__________________
San Franciscan lesbian dwarves and their tomato orgies.



there's a frog in my snake oil
I'm lazy so I asked a bot to reply to your post:
But do you agree with the bot?



I see the question as two-pronged:

1) Is there a problem?
2) Can it be stopped?

I'm pretty sanguine on #1. We can get into why if anyone cares. But regardless, I'm pretty confident that the answer to #2 is "no." I'm not aware of any example in human history of any technology of even modest use being stopped in any form.

I assume the response will be "yes, but we can slow it down and get a handle on it." Maybe. But to be clear, that's the "best" case for anyone concerned about this. I think anyone convinced that this is a problem needs to adopt a delay, rather than a prevention, strategy, because the latter isn't realistic.



there's a frog in my snake oil
I'm pretty sanguine on #1. We can get into why if anyone cares.
This is a hard one to engage with without knowing more. (Or without just listing all the above examples and arguments again.)

Can you give a few reasons why you find the above unproblematic?


But regardless, I'm pretty confident that the answer to #2 is "no." I'm not aware of any example in human history of any technology of even modest use being stopped in any form.
The call for a 'pause' cites some good ones: human cloning, human germline modification, gain-of-function research, and eugenics.

Now almost all of the above have had some slippage, but on the whole they've had the brakes pretty thoroughly applied.

The distinction here is that LLMs have already made their positive sides felt widely, and adoption is already fairly broad.


I assume the response will be "yes, but we can slow it down and get a handle on it." Maybe. But to be clear, that's the "best" case for anyone concerned about this. I think anyone convinced that this is a problem needs to adopt a delay, rather than a prevention, strategy, because the latter isn't realistic.
Oh yep, a delay is all that's feasible I'd say, and only a partial one at that. (Hence the 'genie is out of the bottle' phrasing in the sum-up.)



__________________
I enlisted a goonie-goo-goo to craft this response.

Every industry has taken years if not a decade or more to be regulated. Once AI reaches a certain point, it stands little chance of being regulated.

I’m hopeful about the healthcare it will revolutionize, and I try not to think about how military supremacy will drive it no matter what “agreements” are made.



there's a frog in my snake oil
Every industry has taken years if not a decade or more to be regulated. Once AI reaches a certain point, it stands little chance of being regulated.
I imagine there'll be regulations along the way. Many of them. Some are already being drummed up, such as these moves by the EU.

Given that regulation is always slower out of the blocks, though, they're especially likely to remain under-powered in these cases...




I’m hopeful about the healthcare it will revolutionize, and I try not to think about how military supremacy will drive it no matter what “agreements” are made.
Yeah, dawdling in the dazzle is definitely the bright side.



AI Developers Want To Be Regulated. And With Good Reason.
...

Too Long, Got a Bot to Summarise:

The genie is out of the bottle. We should enjoy the magic it can perform, and have a play. But personally I'm all for a 'pause' in further public deployments. For a detente in the arms race. For some social breathing space to figure this stuff out. Because as societies and individuals we've barely caught up with what last year brought. We are woefully underprepared for the next.
Very good summary of the current AI circumstance, which I view as a bona fide menace.

You're right that it will be almost impossible to stuff the Genie back into the bottle, but IMO Artificial Intelligence ought to be severely restricted. The surveillance state is already out of control, so AI would take it to an unimaginable level.



there's a frog in my snake oil
Very good summary of the current AI circumstance, which I view as a bona fide menace.

You're right that it will be almost impossible to stuff the Genie back into the bottle, but IMO Artificial Intelligence ought to be severely restricted. The surveillance state is already out of control, so AI would take it to an unimaginable level.
Cheers!

It's interesting for me to be on this side of the coin, as on surveillance I've always been more on the wary than warlike end of the scale. But as with many areas of civil rights, LLMs are about to amp up some negative possibilities.

It's notable that the red lines the EU are drafting focus very much on those kinds of areas, with the top line being prohibition of AI for: "biometric surveillance, emotion recognition, predictive policing".

I think they'll be lucky to hold that line on all fronts, but it's an interesting battle line to draw.

I suspect the 'AI Dilemma' vid is right when it suggests that 'Friend Bot' style tools, and subsequent analysis of fine-grained emotional content and influence of the same, will be a major initial use-case and business model. I think we're quite used to 'being the product' in free services like that, and to date they've been relatively benign re tailoring advertising etc. But this one does have the feel to me of a quiet road towards some more potent and pernicious norms. (Initially hooked to the AIs becoming experts in targeted persuasion, for example).



This is a hard one to engage with without knowing more. (Or without just listing all the above examples and arguments again.)

Can you give a few reasons why you find the above unproblematic?
Well, it depends on what "the above" encompasses. I'm sanguine on the idea of a self-improving AI that poses an existential threat to humanity.

The other stuff, more mixed, though in most cases I tend to think of it as something, like deepfakes, that will just inflict some growing pains on society that we'll eventually incorporate reasonably well into our model of the world, in the same way we're sorta just figuring out now that we shouldn't spend all day in front of screens or on social media.

The call for a 'pause' cites some good ones: human cloning, human germline modification, gain-of-function research, and eugenics.

Now almost all of the above have had some slippage, but on the whole they've had the brakes pretty thoroughly applied.

The distinction here is that LLMs have already made their positive sides felt widely, and adoption is already fairly broad.
Aye, that and the barriers to entry are, in some ways, a lot lower. It's pretty hard to set up a cloning lab in your garage. I know AI can't just run on a laptop, or whatever, but that, too, seems like a matter of time. I wouldn't agree entirely about eugenics, either, but it's not worth getting into. The main thing is I think the examples above have not really stopped the technology as a whole, but just very specific applications of it. And we'll see if they hold, since this is the kind of thing measured in centuries.

Oh yep, a delay is all that's feasible I'd say, and only a partial one at that. (Hence the 'genie is out of the bottle' phrasing in the sum-up.)
Yeah, I figured. I confusingly responded to the topic in general and it probably sounded like I was responding to just you, specifically. I kind of just expect this one's gonna be all over the place with lots of different opinions from lots of different people.



there's a frog in my snake oil
Well, it depends on what "the above" encompasses. I'm sanguine on the idea of a self-improving AI that poses an existential threat to humanity.
Oh yeah, I'm sanguine on that too for now. And hey, OpenAI just put a ~10yr time-frame on a 'superintelligence' outstripping us, so plenty of time to debate that. (And the IAEA-style regulatory body they think should be erected in advance).

(The flutterings of 'Artificial General Intelligence' emerging in the latest public models are intriguing in their own right though).


The other stuff, more mixed, though in most cases I tend to think of it as something, like deepfakes, that will just inflict some growing pains on society that we'll eventually incorporate reasonably well into our model of the world, in the same way we're sorta just figuring out now that we shouldn't spend all day in front of screens or on social media.
I agree with the principle that we'll adjust to deepfakes just as we adjusted to Photoshop etc. I think the issue for me is that the things listed above won't happen in isolation. It's very possible there'll be a run of shocks to our social systems in a relatively short time-frame. (Due in part to the current speed of development, and in part to the way these new discoveries can amplify each other).

So this would be something society can absorb and adapt to:
  • Convincing political deepfakes being disseminated by convincing synthetic agents online.

But now imagine that happens over a 5-year period where:
  • Opportunities for criminality have been expanded. (The layman hacking, deepfake scams, 'bedroom chemistry' of the OP, etc.)

The repercussions start to stack.

And those would just be some of the current LLM capabilities & outputs being adopted more widely. In the background there would be more coming to fruition. (Say the non-invasive mind-reading tech referenced in the OP, etc etc).


Aye, that and the barriers to entry are, in some ways, a lot lower. It's pretty hard to set up a cloning lab in your garage. I know AI can't just run on a laptop, or whatever, but that, too, seems like a matter of time. I wouldn't agree entirely about eugenics, either, but it's not worth getting into. The main thing is I think the examples above have not really stopped the technology as a whole, but just very specific applications of it. And we'll see if they hold, since this is the kind of thing measured in centuries.
Yep, it's a fair point that expressions of core techs have been limited there, in the main, rather than the technologies themselves. And that may be the way forward with LLMs etc too.

But it may be easier said than done. Part of the magic sauce in the likes of GPT-4 is the sheer black-box enormity of its training set, and the resulting spread of potential outputs. It's hard to know how it arrives at its answers, and hard to prune back unwanted outputs entirely. (As demonstrated by the ease with which people have bypassed the 'alignment' training that prevents it from giving illegal advice etc.)

So even if certain dedicated technologies could be roadblocked (say: the 'mind reading' use, based on brain-scan data etc), it seems possible that comparable abilities could arise in unrelated models. (I.e. in the sense that GPT-4 has outperformed dedicated chemistry LLMs, and demonstrated visual and spatial competence despite being trained only on words etc.)

Some of this may change if the 'Sparse Expert' models prove robust. I.e. they're more interrogable in theory, and more fine-tunable regarding their discrete collection of 'expertises'. (And also much lower on compute costs, so getting closer to the 'bedroom LLM' that you mention).
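
To make that concrete, here's a minimal illustrative sketch of the 'top-k' routing idea at the heart of such Sparse Expert (mixture-of-experts) models, written in PyTorch. To be clear: the class name, layer sizes, and routing scheme below are assumptions for illustration, not any real system's internals.

import torch
import torch.nn as nn

class SparseExpertLayer(nn.Module):
    # One sparse mixture-of-experts layer: each token gets routed to
    # k of n_experts small feed-forward networks ('expertises').
    def __init__(self, dim: int = 64, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(), nn.Linear(dim * 4, dim))
            for _ in range(n_experts)
        ])
        # A cheap router scores every token against every expert.
        self.router = nn.Linear(dim, n_experts)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, dim)
        scores = self.router(x)                        # (n_tokens, n_experts)
        weights, chosen = scores.topk(self.k, dim=-1)  # keep only the top-k experts
        weights = weights.softmax(dim=-1)              # normalise the kept scores
        out = torch.zeros_like(x)
        # Only the chosen experts actually run, so compute scales with k
        # rather than with the total number of experts.
        for i, expert in enumerate(self.experts):
            hit = (chosen == i)                        # (n_tokens, k) boolean mask
            rows = hit.any(dim=-1).nonzero(as_tuple=True)[0]
            if rows.numel() == 0:
                continue
            w = (weights * hit.float()).sum(dim=-1, keepdim=True)[rows]
            out[rows] += w * expert(x[rows])
        return out

layer = SparseExpertLayer()
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])

The 'more interrogable in theory' part is visible even in this toy version: the router's choices are discrete, so you can log exactly which experts fired for a given token, and in principle fine-tune or excise a single expert. And since only k experts run per token, compute stays well below an equally large dense model.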

TLDR: All of this speaks to your point that the tech can't be stopped long-term. But also to mine that perhaps we should slow its roll a touch.