AI Developers Want To Be Regulated. And With Good Reason.
Hey all.
You've probably noticed that various AI luminaries have been calling for everything from regulation of their industry up to a complete pause on new product releases. It's easy to see these calls as an attempt to freeze the marketplace while they're in a dominant position. But I'd say there's more than that going on...

--- Mutually Assured Discussion:

As the existing court cases over copyright demonstrate, there's a whole new legal world to be explored. And given the current tools contain capabilities that even their creators don't know about (cutely described as 'capability overhang'), it seems reasonable to assume that this legal world is going to get fairly ample fairly quickly. As much as businesses love being the new disruptor, they like the predictability of a settled battlefield too.

And my suspicion is: these guys know they're sitting on the most disruptive set of innovations in a helluva long time. Ones that may well run out of their control in various ways, and smash a few more norms than they're comfortable with. And they'd quite like to be prepared...

--- Can We Do That, Dave?:

To delve into why they might feel that way, we'd have to get more technical. But thankfully there's a simple overview of the key negatives at play (which are gliding along somewhat unseen, thanks to the many positive advancements that these technologies are also bringing). It's great, and you should watch it :)

https://youtu.be/xoVJKj8lcNQ

Unfortunately it's also an hour long, and contains some TED-talk tech-bro framing that may put some people off before they get to the meat. So I'll do my best to summarise the key contentions:
--- A Positive Spin, And Then Back Again:
What I found mind-blowing about the video, at core, was this: I've watched a lot of these technologies inch along over the last decade, and seen them start to explode in the last few years. And taken individually, I've always been dazzled by their positive potentials first, and left their negatives very much in the background. But I think stripping away the dazzle, and just noting how the negatives synthesise, is instructive in this case. They are equally potent. And ignoring them won't make them go away.

--- Too Long, Got a Bot to Summarise:

The genie is out of the bottle. We should enjoy the magic it can perform, and have a play. But personally I'm all for a 'pause' in further public deployments. For a detente in the arms race. For some social breathing space to figure this stuff out. Because as societies and individuals we've barely caught up with what last year brought. We are woefully underprepared for the next.
I'm lazy so I asked a bot to reply to your post:
I can see that you have done some research on the topic and have some valid points. AI regulation is a complex and controversial issue that has many pros and cons. Some of the pros of regulating AI are:
Some of the cons of regulating AI are:
The debate on AI regulation is ongoing and evolving, as different stakeholders have different perspectives and interests. For example, the EU has proposed a comprehensive legal framework for AI regulation that aims to ensure a human-centric and ethical approach to AI. However, some critics argue that the EU proposal is too restrictive or vague, while others argue that it is not enough to address the challenges of AI. Similarly, according to search results, some U.S. senators and Sam Altman, the CEO of OpenAI, have expressed their views on how to regulate AI in a recent hearing. They agreed that some form of regulation is needed, but they differed on the details and scope of such regulation.

As you can see, there is no easy or definitive answer to the question of how to regulate AI. It is a matter of balancing the risks and opportunities of AI, as well as the values and goals of different actors. I think it is important to have an informed and constructive dialogue on this topic, involving various stakeholders from academia, industry, government, civil society, and users. I also think it is important to keep learning from the latest developments and best practices in AI regulation around the world.
Originally Posted by Mr Minio (Post 2389016)
I'm lazy so I asked a bot to reply to your post:
What does a bot know? ha!
Aren’t they just trying to make us feel comfortable till they find an effective way to replace us? ;)
Re: AI Developers Want To Be Regulated. And With Good Reason.
I see the question as two pronged:
1) Is there a problem?
2) Can it be stopped?

I'm pretty sanguine on #1. We can get into why if anyone cares. But regardless, I'm pretty confident that the answer to #2 is "no." I'm not aware of any example in human history of any technology of even modest use being stopped in any form.

I assume the response will be "yes, but we can slow it down and get a handle on it." Maybe. But to be clear, that's the "best" case for anyone concerned about this. I think anyone convinced that this is a problem needs to adopt a delay, rather than a prevention, strategy, because the latter isn't realistic.
Originally Posted by Yoda (Post 2389043)
I'm pretty sanguine on #1. We can get into why if anyone cares.
Can you give a few reasons why you find the above unproblematic?
Originally Posted by Yoda (Post 2389043)
But regardless, I'm pretty confident that the answer to #2 is "no." I'm not aware of any example in human history of any technology of even modest use being stopped in any form.
Now almost all of the above have had some slippage, but on the whole they've had the brakes pretty thoroughly applied. The distinction here is that LLMs have already made their positive sides felt widely, and adoption is already fairly broad.
Originally Posted by Yoda (Post 2389043)
I assume the response will be "yes, but we can slow it down and get a handle on it." Maybe. But to be clear, that's the "best" case for anyone concerned about this. I think anyone convinced that this is a problem needs to adopt a delay, rather than a prevention, strategy, because the latter isn't realistic.
I enlisted a goonie-goo-goo to craft this response.
Every industry has taken years, if not a decade or more, to be regulated. Once AI reaches a certain point, it stands little chance of being regulated. I'm hopeful about the healthcare it will revolutionize, and try not to think military supremacy will drive it no matter what "agreements" are made.
Originally Posted by doubledenim (Post 2389067)
Every industry has taken years if not a decade or more to be regulated. Once AI reaches a certain point, it stands little chance of being regulated.
Given that regulation is always slower out of the blocks, it's especially likely to remain under-powered in cases like this though...

https://64.media.tumblr.com/cad4eab4...89ndo9_400.gif
Originally Posted by doubledenim (Post 2389067)
I’m hopeful of the healthcare it will revolutionize and try not to think military supremacy will drive it no matter what “agreements” are made.
Originally Posted by Golgot (Post 2388880)
AI Developers Want To Be Regulated. And With Good Reason.
... Too Long, Got a Bot to Summarise: The genie is out of the bottle. We should enjoy the magic it can perform, and have a play. But personally I'm all for a 'pause' in further public deployments. For a detente in the arms race. For some social breathing space to figure this stuff out. Because as societies and individuals we've barely caught up with what last year brought. We are woefully underprepared for the next.

You're right that it will be almost impossible to stuff the Genie back into the bottle, but IMO Artificial Intelligence ought to be severely restricted. The surveillance state is already out of control, so AI would take it to an unimaginable level.
Originally Posted by GulfportDoc (Post 2389110)
Very good summary of the current AI circumstance, which I view as a bona fide menace.
You're right that it will be almost impossible to stuff the Genie back into the bottle, but IMO Artificial Intelligence ought to be severely restricted. The surveillance state is already out of control, so AI would take it to an unimaginable level.

It's interesting for me to be on this side of the coin, as on surveillance I've always been more on the wary than warlike end of the scale. But as with many areas of civil rights, LLMs are about to amp up some negative possibilities.

It's notable that the red lines the EU are drafting focus very much on those kinds of areas, with the top line being prohibition of AI for: "biometric surveillance, emotion recognition, predictive policing". I think they'll be lucky on all fronts, but it's an interesting battle line.

I suspect the 'AI Dilemma' vid is right when they suggest that 'Friend Bot' style tools, and subsequent analysis of the fine-grained emotional content and influence of the same, will be a major initial use-case and business model. I think we're quite used to 'being the product' in free services like that, and to date they've been relatively benign re: tailoring advertising etc. But this one does have the feel to me of a quiet road towards some more potent and pernicious norms. (Initially hooked to the AI becoming expert in targeted persuasion, for example.)
Originally Posted by Golgot (Post 2389058)
This is a hard one to engage with without knowing more. (Or without just listing all the above examples and arguments again ;)).
Can you give a few reasons why you find the above unproblematic?

The other stuff, more mixed, though in most cases I tend to think of it as something, like deepfakes, that will just inflict some growing pains on society that we'll eventually incorporate reasonably well into our model of the world, in the same way we're sorta just figuring out now that we shouldn't be in front of screens or spend all day on social media.
Originally Posted by Golgot (Post 2389058)
The call for a 'pause' cites some good ones: human cloning, human germline modification, gain-of-function research, and eugenics.
Now almost all of the above have had some slippage, but on the whole they've had the brakes pretty thoroughly applied. The distinction here is that LLMs have already made their positive sides felt widely, and adoption is already fairly broad.
Originally Posted by Golgot (Post 2389058)
Oh yep, a delay is all that's feasible I'd say, and only a partial one at that. (Hence the 'genie is out of the bottle' phrasing in the sum up ;)).
Originally Posted by Yoda (Post 2389214)
Well, it depends on what "the above" encompasses. I'm sanguine on the idea of a self-improving AI that poses an existential threat to humanity.
(The flutterings of 'Artificial General Intelligence' emerging in the latest public models are intriguing in their own right though).
Originally Posted by Yoda (Post 2389214)
The other stuff, more mixed, though in most cases I tend to think of it as something, like deepfakes, that will just inflict some growing pains on society that we'll eventually incorporate reasonably well into our model of the world, in the same way we're sorta just figuring out now that we shouldn't be in front of screens or spend all day on social media.
So this would be something society can absorb and adapt to:
But now imagine that happens over a 5-year period where:
The repercussions start to stack. And those would just be some of the current LLM capabilities & outputs being adopted more widely. In the background there would be more coming to fruition. (Say the non-invasive mind-reading tech referenced in the OP, etc etc).
Originally Posted by Yoda (Post 2389214)
Aye, that and the barriers to entry are, in some ways, a lot lower. It's pretty hard to set up a cloning lab in your garage. I know AI can't just run on a laptop, or whatever, but that, too, seems like a matter of time. I wouldn't agree entirely about eugenics, either, but it's not worth getting into. The main thing is I think the examples above have not really stopped the technology as a whole, but just very specific applications of it. And we'll see if they hold, since this is the kind of thing measured in centuries.
But it may be easier said than done. Part of the magic sauce in the likes of GPT-4 is the sheer black-box enormity of its training set, and the resulting spread of potential outputs. It's hard to know how it arrives at its answers, and hard to prune back unwanted outputs entirely. (As demonstrated by the ease with which people have bypassed the 'alignment' training which prevents it giving illegal advice etc.)

So even if certain dedicated technologies could be roadblocked (say: the 'mind reading' use, based on brain scan data etc), it seems possible that comparable abilities could arise in unrelated models. (IE in the sense that GPT-4 has outperformed dedicated chemistry LLMs, and demonstrated visual and spatial competence despite only being trained on words etc.)

Some of this may change if the 'Sparse Expert' models prove robust. IE they're easier to interrogate in theory, and more fine-tunable regarding their discrete collection of 'expertises'. (And also much lower on compute costs, so getting closer to the 'bedroom LLM' that you mention.)

TLDR: All of this speaks to your point that the tech can't be stopped long-term. But also to mine that perhaps we should slow its roll a touch ;)