
Duplicate of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence with user-submitted answers. An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (à la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
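For concreteness, the criterion reads as a conjunction with a hard threshold. A minimal sketch of that reading (the function and argument names are mine, purely illustrative):

```python
def okay_outcome(value: float, max_cev_value: float,
                 any_deaths: bool, any_awful_fates: bool) -> bool:
    # "Okay" per the description above: at least 20% of the maximum
    # cosmopolitan value attainable by a correctly-done CEV Singularity,
    # and no existing human suffers death or any other awful fate.
    return (value >= 0.2 * max_cev_value
            and not any_deaths
            and not any_awful_fates)
```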
What exactly is the plan to resolve the multiple non-contradictory resolution criteria? Will there be some kind of "weighted according to my gut feeling of how important they are"? Will they all resolve "yes"? Or is it "I will pick the one that was most centrally true"?
It would be nice if there were some kind of flow chart for resolution, like in my "if AI causes human extinction" market.
I don't seem to have the ability to resolve the current answers N/A, and would hesitate to resolve "No" under the circumstances unless a mod okays that.
Unfortunately this is a dependent multiple choice market, so all options have to resolve (summing to 100%, or N/A) at the same time. So it's not a question of whether that's OK with mods; it simply isn't possible given the market structure.
It's not an uncommon issue: popular dependent MC markets often get many unwanted answers added. It would be great if there were better tools to control this, but unfortunately the options are pretty blunt. My personal recommendation (though it's totally up to you) would be to change the market settings so that only the creator can add answers; then people can make suggestions in the comments, and you can choose whether to include them. (I can make that change to the settings if you'd prefer, but it's under the three dots for more market options.)
You can also feel free to edit any unwanted answers to just say "N/A" or "Ignore" etc., to partially clean up the market (and clarify where attention should go). That's very much within your rights as creator. But there's no way to actually remove the options (or resolve them early, although they will quickly go to ~0% with natural betting).
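To make the resolution constraint concrete, here is a toy sketch of the invariant on a dependent (linked) multiple-choice market: either everything resolves N/A together, or the resolution percentages across all options sum to 100. This only illustrates the rule described above; it is not Manifold's actual code:

```python
def validate_linked_resolution(shares: dict[str, float | None]) -> None:
    # None marks an option resolved N/A. In a linked market, either all
    # options resolve N/A together, or every option gets a percentage
    # and the percentages must sum to 100.
    if all(s is None for s in shares.values()):
        return  # full N/A resolution
    if any(s is None for s in shares.values()):
        raise ValueError("Options in a linked market must resolve together")
    total = sum(shares.values())
    if abs(total - 100.0) > 1e-9:
        raise ValueError(f"Linked options must sum to 100%, got {total}%")
```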
@EliezerYudkowsky If it's not too much of a hassle, would you also consider making an unlinked version of this market with the most promising options copied over, so that the non-mutually-exclusive options don't distort each other's probabilities? I know I could do this myself if necessary, but your influence brings vastly more attention to the market, and this seems like a fairly important market question. Maybe the wording would need to be very slightly altered to "...what will be true of the reason?"
@Krantz This was too long to fit.
Enough people understand that we can control a decentralized GOFAI by using a decentralized constitution that is embedded into a free and open market that sovereign individuals can earn a living by aligning. Peace and sanity are achieved game-theoretically by making the decentralized process that interpretably advances alignment the same process we use to create new decentralized money. We create an economy that properly rewards the production of valuable alignment data, and it feels a lot like a school that pays people to check each other's homework. It is a mechanism that empowers people to earn a living by doing alignment work decentrally in the public domain. This enables us to learn the second bitter lesson: "We needed to be collecting a particular class of data, specifically confidence and attention intervals for propositions (and logical connections of propositions) within a constitution."
If we radically accelerated the collection of this data by incentivizing its growth monetarily in a way that empowers poor people to become deeply educated, we might just survive this.
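As far as I can tell, the "particular class of data" being described would look something like the sketch below. All names and field choices here are guesses at what "confidence and attention intervals for propositions" might mean; no schema is actually specified:

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    # One entry in a "constitution": a claim plus the maintainer's
    # credence interval for it and the attention it has received.
    text: str
    confidence: tuple[float, float]  # e.g. (0.7, 0.9) credence interval
    attention: tuple[float, float]   # e.g. share of review effort spent

@dataclass
class Connection:
    # A logical connection between propositions, scored the same way.
    premise_ids: list[int]
    conclusion_id: int
    confidence: tuple[float, float]
```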
@LoganZoellner Maybe you should correct the market. I've got plenty of limit orders to be filled.
@LoganZoellner Personally, I would actually support a total N/A at this point, given the nonsensical nature of a linked market with non-mutually-exclusive options; it makes the site look bad being promoted so high on the home page.
@Krantz
>Maybe you should correct the market. I've got plenty of limit orders to be filled.
Given this market appears completely nonsensical, I have absolutely 0 faith that my ability to stay liquid will outlast this market's ability to be irrational.
I have had bad luck in the past investing in markets where the resolution criterion was basically "the author will choose one of these at random at a future date".
Also, note that this market isn't monetized, so even though I'm 99.9999999999% sure that neither of those options will resolve positively, there isn't actually any way for me to profit off that information.
A friend made a two-video series about it; he is pretty smart and convinced me that AI fear is kind of misguided.
@AlexeiTurchin That, and trade-offs. Like, if AI A is really good at task x, it will suck shit at task y. That's why AlphaGo kills LLMs every time at Go.
@Krantz If anyone is willing to cheerfully and charitably explain their position on this, I'd like to pay you here:
https://manifold.markets/Krantz/who-will-successfully-convince-kran?r=S3JhbnR6
Humanity coordinates to prevent the creation of potentially-unsafe AIs.
This is really hard, but it's boundedly hard. There are plenty of times we Did the Thing, Whoops (leaded gasoline, WW2, social media), but there's also some precedent for the top tiny percent of humans coming together to Not Do the Thing or Only Slightly Do the Thing (nuclear war, engineered smallpox, human-animal hybrids, Project Sundial).
It's easy to underestimate the impact of individuals deciding not to push capabilities, but consider voting: rationally, any individual vote is completely impotent, and yet in practice voting completely decides the outcome.
There was a LessWrong article listing different sampling assumptions in anthropics, one of which was the Super-Strong Self-Sampling Assumption (SSSSA): I am randomly selected from all observer-moments, weighted by their intelligence/size. This would explain why I'm a human rather than an ant. However, since I don't find myself as a superintelligence, this may be evidence that conscious superintelligence is rare. Alternatively, it could imply that "I" will inevitably become a superintelligence, which could be considered an okay outcome.
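A rough formalization of that sampling rule (my notation, not the article's): under SSSSA, the probability of finding yourself as observer-moment $o$ is proportional to its weight $w(o)$, where $w$ tracks intelligence/size:

$$P(\text{I am } o) = \frac{w(o)}{\sum_{o'} w(o')}$$

So if conscious superintelligences existed and carried most of the total weight, a randomly sampled "I" would almost certainly be one; finding yourself human is then weak evidence against that.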
@Phi I think this lines up well with a segment in
https://knightcolumbia.org/content/ai-as-normal-technology
"Market success has been strongly correlated with safety... Poorly controlled AI will be too error prone to make business sense"
How about this: AI is the collective intelligence of Earth's civilization, with both humans and machines as vital parts of it. As the AI evolves, humans will become more heavily augmented and genetically modified, the definition of "human" will stretch, and humanity will evolve as a subsystem of the AI.
P.S. And maybe it already exists and is busy reorganizing our world ;)
> they will be relatively humanlike
> they will be "computerish"
> [they will be] generally weakly motivated compared to humans
These are 3 very different possibilities, why are they all listed in the same option as if they're all true at once?
Slightly off topic, but I find it very amusing to picture blankspace737 / stardust / Tsar Nicholas coming across this market, seeing the "god is real" option at 0%, and repeatedly trying, with growing desperation, to bet it up for infinite winnings, only to be met each time with the server error message.
Even in good cases, 20% of the maximum attainable CEV seems unlikely. I expect outcomes are so heavy-tailed that even if alignment is basically solved, we rarely get anything close to 20% of the maximum. There's a lot of room at the top. It may also be that the maximum is unbounded!
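As a toy illustration of that heavy-tail intuition (the distribution and parameters are my choice, purely illustrative): if attained value were log-normal with a wide spread, and we use a very high quantile as a stand-in for the ill-defined "maximum", only a tiny fraction of draws reach 20% of it:

```python
import numpy as np

# Toy model: attained cosmopolitan value ~ log-normal with a wide spread.
# The "maximum" is ill-defined if unbounded, so use the 99.99th percentile
# as a stand-in. Parameters are arbitrary.
rng = np.random.default_rng(0)
values = rng.lognormal(mean=0.0, sigma=3.0, size=1_000_000)
near_max = np.quantile(values, 0.9999)
frac_okay = (values >= 0.2 * near_max).mean()
print(f"Fraction of draws at >= 20% of the 99.99th percentile: {frac_okay:.5f}")
# Prints roughly 0.0007: even conditional on a "good" distribution,
# near-maximum outcomes are rare when the tail is heavy.
```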
Not to pathologize, and I could very well be projecting, but seeing Krantz's writing for the first time took me back, Ratatouille-style, to the types of ideas my hypomanic episodes used to revolve around before I knew how to ground myself. It's not that I think any of it is delusional, but from what I've seen of the general frantic vibe of his writing and of his responses to criticism, it seems like he has such deeply held idealistic notions of what the future holds that it starts to get (in my experience, at least) difficult and almost painful to directly engage with and account for others' thoughts on your ideas at anything more than a superficial level if they threaten to disrupt said notions. Maybe that's normal, idk.
Here are the steps.
Step 1. Eliezer (or anyone with actual influence in the community) listens to Krantz.
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
https://manifold.markets/Krantz/this-is-a-solution-to-alignment?r=S3JhbnR6
Step 2. Krantz shows Eliezer how simple proposition constitutions can be used to interpretably align other constitutions and earn points in the process, thus providing an option to pivot away from machine learning and back into GOFAI.
Step 3. We create a decentralized mechanism where people maintain their own constitutions privately, with the intention that they will be used to compete over aligning a general constitution to earn 'points' (turning 'alignment work' into something everyone can earn crypto for doing).
Step 4. Every person understands that the primary mechanism for getting an education, hearing the important news, and voting on the social contract is maintaining a detailed constitution of the truth.
Step 5. Our economy is transformed into a competition for who can have the most extensive, accepted and beneficial constitution for aligning the truth.
https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6