We asked ChatGPT if it had a Code of Ethics, and then we looked up OpenAI's mission statement. Here is what we found.
Are you working on your code of ethics?
ChatGPT
I don't have personal experiences, intentions, or the ability to work on projects independently. However, the development and deployment of AI models, including those based on GPT-3.5 architecture like mine, involve ethical considerations. OpenAI, the organization behind my creation, is responsible for establishing guidelines and ethical frameworks for the use of its models.

As of my last knowledge update in January 2022, OpenAI has been actively engaged in discussions about the ethical use of AI and has made efforts to address concerns related to bias, transparency, and responsible deployment. It's important to check OpenAI's official communications or website for the latest information on their code of ethics and any updates they may have made since my last knowledge update.
OpenAI, or OpenAI LP, is an artificial intelligence research lab consisting of the for-profit OpenAI LP and its non-profit parent company, OpenAI Inc. It was founded with the goal of advancing digital intelligence in a way that benefits humanity as a whole. OpenAI aims to conduct research in artificial general intelligence (AGI) – highly autonomous systems that outperform humans at most economically valuable work.
AGI MISSION
Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.
If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.

AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.[A]
[A] We seem to have been given lots of gifts relative to what we expected earlier: for example, it seems like creating AGI will require huge amounts of compute and thus the world will know who is working on it, it seems like the original conception of hyper-evolved RL agents competing with each other and evolving intelligence in a way we can’t really observe is less likely than it originally seemed, almost no one predicted we’d make this much progress on pre-trained language models that can learn from the collective preferences and output of humanity, etc.
AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast. Many of us think the safest quadrant in this two-by-two matrix is short timelines and slow takeoff speeds; shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang, and a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.
Although we cannot predict exactly what will happen, and of course our current progress could hit a wall, we can articulate the principles we care about most:
1. We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity.
2. We want the benefits of, access to, and governance of AGI to be widely and fairly shared.
3. We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.