MISSION CRITICAL: #ALIGNEDAI

May 01, 2024 20:12


I was asked to assist with an open source AI project, which carries its share of potential risks & rewards. For the textbook analysis of those trade-offs, I must reference the work of Prof. Nick Bostrom... for as questionable as his motives may be... I cannot help but wonder: who would know the dangers better than someone who knows exactly how to exploit such a system?
_Strategic Implications of Openness in AI Development_

Before I dive into my usual wall of text/links... I have been asked to check & make sure that you are at least familiar with this paper by Yudkowsky (even though it is two decades old and arguably obsolete, it does establish some important concepts and strategies): _Coherent Extrapolated Volition_

And on a meta-related note, my local crew has been wanting to pool resources and share our respective wisdom about running tech co-ops... please lmk if you are interested? And while we are at it, perhaps we could discuss how to explore these ideas further, specifically with regard to machine learning?

The essence of the OP was that it's inevitable that humans will begin to deify this new technology, so why don't we get ahead of the game & build something beneficial, with a bombastic title similar to (this is an edited title from the OP, to allow them to post their own original suggestion without my spoilers)

Unfortunately, as much fun as this sounds... until we can generate funding one way or another, the best I can do right now to help this or any machine learning project is to share what I have written about AI in the past year, FWIW... But even this particular question of how time & money are related underlies the very essence of what any intelligence, artificial or organic, could possibly provide, whatever its methods may be:

How do we take care of "your tired, your poor, your huddled masses yearning to breathe free, the wretched refuse of your teeming shore" ...and how do we treat the predators? Are they punished & executed or celebrated & rewarded?

I think that this is as crucial a conversation as we could possibly be having, honestly... I would be delighted to hear any feedback you are willing to offer... Because this is where we are, currently:



No matter what friendly masks it wears, we should be able to recognize Moloch by now...


Before I go any further, I must ask if you are familiar with the Slate Star Codex essay _Meditations on Moloch_, which has become an essential rallying point for understanding our predicament.

There are of course already extensive convos out there discussing the specific relationships between the Moloch metaphor, AI misalignment & the meta-crisis:
"Moloch is the god of negative sum games-unhealthy competitive situations. It incentivizes players within a game to sacrifice more and more of their values in order to win." - @Liv_Boeree

I recently noticed a post from Rob Brezsny that provides important context...
"There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own." - Ted Chiang

This seemed oddly familiar... And a-ha! I knew I had seen a more recent reference to Ted Chiang's perspective on this subject... in this NYT article by Ezra Klein:
_The Imminent Danger of A.I. Is One We’re Not Talking About_

I also think it's important to differentiate the concurrent problem of the unchecked power of corporations, specifically... Paco Xander Nathan outlines that scenario well in this essay, _Corporate Metabolism_, published over 20 years ago... & sublation has never been a more crucial concept for us to reckon with, as we face the rise of the fascist state again!

For a delightful discussion of the currently deplorable state of affairs, during a late-stage capitalist empire with one foot in the grave (and some ideas for proverbial garlic to keep the actual vampires away)... I highly recommend this spiffy podcast interview with the authors, Cory Doctorow & Rebecca Giblin, on Team Human with Douglas Rushkoff (and the books that inspired it)... I think that you will deeply empathize (if you haven't heard/read them already)!



https://chokepointcapitalism.com/

And focused on our particular current concern, this seems pertinent...
_Dismantling AI capitalism: the commons as an alternative to the power concentration of Big Tech_

In macroeconomic conversations more generally, I will point towards the Nordic model as a more favorable path forward.

--

Rather than , I had begun to wonder if would be a slightly less triggering reference? But then I realized that you may be aiming for maximum apocalyptic resonance? Or, alternatively, maybe or would be cognate terms with less cultural appropriation? I do realize that can already be found running background processes in any multitasking OS, so I know that terms with mystical roots can be repurposed in a cybernetic format.

But I do think that we are at a particularly sensitive time for such branding... So should I wonder what your angle is? Perhaps you are actively trying to promote Armageddon, or at least the appearance thereof (even if satirical)? Which I suppose is not surprising, considering the actual dangers becoming apparent regarding AI development & deployment (whether we experience any of the numerous severe scenarios two years in the future, or twenty, we are living in times of dangerous opportunity). If so, I will admit that I have engaged in deep shadow integration work around clinical death that I refer to as ...

In any case, I get that even bad press can be good press & all; however, I honestly think that we are still in territory in which such terminology can be misleading. Yes, there are some spookily profound things being done with LLMs, but don't believe the marketing hype! These machine learning engines understand

#alignedai
