Big fan of your SubStack in general; this *particular* post (esp. the title, which is false on its face, when you think about it) is, I think, pernicious, though. For me, the bottom line: both things can be true. Regulations, therefore, should address both, especially since most regulations (forcing transparency, e.g.) that can address one can address the other. It's really that simple.
Broaden the discussion beyond AI to all issues, for example, and we'd never do anything about calamitous climate change—since there's *always* going to be something more pressing for us to focus on in the more immediate time-frame. It's a dead end argument, imho.
First, thank you very much for the kind words.
On this point, I think we are going to agree to disagree. I simply do not believe, given what I know of how these systems are created and work, that a runaway AGI is a realistic possibility in anything like a realistic timeframe. Unlike catastrophic climate change, for example, the mechanism for this is not there.
Given that, and given that these calls seem almost always to involve, or be pushed by, people with a vested interest in ensuring the regulation is focused on the far away and not the here and now, I think it important to drag the conversation back to the here and now.
Now, I likely could have made this point better, and I could be wrong, of course. Not everyone worried about this is worried about it because their salary depends upon being worried about it, to mangle the old saying. But that is my general feeling: I think the hype has gotten way, way ahead of the realistic possibilities here and as a result, it's too easy to lose sight of how these systems do real damage today and deserve to be much more tightly controlled now.
Again, thanks for the kind words and the interesting conversation.
Thank *you* for the conversation! I see your points; and of course you and I would likely somewhere-near-100% agree on the nuts and bolts of what needs to happen—mostly, we seem to disagree on the efficacy of the rhetoric.
We had a dry run of all this with social media algorithms, and I just wish more people had had their hair on fire back in the 2010s about that. You'll recall that all the rhetoric was positive about 'social networks' for a long time. Maybe if a few more people had been suggesting that another addictive technology that no one was testing was a bad idea, we could have avoided a lot of really poor outcomes (two genocides, skyrocketing youth mental health problems, democracies undermined) in that space, too?
I agree with that completely. And to be fair, that might be driving some of my reaction now, that and the crypto bubble. Too much deference to "innovation" and too little scrutiny has been a disaster for most normal people for the last decade or so when it comes to technology.