digest 1999-02-04 #001.txt


Date: 3 Feb 1999 11:30 PM -0800
From: "Society for Literature & Science"

Daily SLS Email Digest

-> thesis on AI & cultural theory
   by Phoebe Sengers
-> Re: thesis on AI & cultural theory
   by Martin Rosenberg
-> Re: thesis on AI & cultural theory
   by Phoebe Sengers
-> Re: thesis on AI & cultural theory
   by Phoebe Sengers
-> Re: thesis on AI & cultural theory
   by Susan Squier
----------------------------------------------------------------------
Date: 3 Feb 1999 02:25:25 -0800
From: Phoebe Sengers 

Subject: thesis on AI & cultural theory
Dear SLS-ers,

My thesis integrating Artificial Intelligence and cultural theory is
now available for your downloading / ordering pleasure at
http://www.cs.cmu.edu/~phoebe/work/thesis.html.  Note: it is huge.
I've attached a teaser to give you a feel for what I'm doing.  Enjoy!

Phoebe Sengers
Fulbright Scholar
Center for Art & Media Technology (ZKM)
Karlsruhe, Germany
------------------------------------------------------------------
Artificial Intelligence (AI) has come a long way.  Particularly in the
last ten years, the subfield known as `agents' --- artificial creatures
that `live' in physical or virtual environments, capable of engaging in
complex action without human control --- has exploded.  We can now
build agents that can do a lot for us: they search for information on
the Web, trade stocks, play grandmaster-level chess, patrol nuclear
reactors, remove asbestos, and so on.  Agents have become powerful
tools.

But one of the oldest dreams of AI is the `robot friend', an artificial
being that is not just a tool but has its own life.  Such a creature we
want to talk to, not just to find out the latest stock quotes or the
answer to our database queries, but because we are interested in its
hopes and feelings.  Yes, we can build smart, competent, useful
creatures, but we have not built very many that seem complex, robust,
and alive in the way that biological creatures do.  Who wants to be
buddies with a spreadsheet program, no matter how anthropomorphized?
Somehow, in our drive for faster, smarter, more reliable, more useful,
more profitable artificial agents, it seems we may have lost something
equally important: the dream of a creature which is, on its own terms,
alive.

At the same time, as the notion of `agent' has started to take on pop
culture cachet, outside academics have begun to turn a
not-always-welcome critical eye on the practices of AI.  To humanists
interested in how AI fits into broader culture, both the goals and the
methodologies of AI seem suspect.  With AI funding coming largely from
the military and big business, critics may wonder if AI is just about
building autonomous fighter pilots, more complex voicemail systems, and
robots to replace human workers on assembly lines.  The notion of the
hyperrational, disembodied agent which still drives much AI research
strikes many critics as hopelessly antiquated and even dangerous.  AI
research, these critics say, is about reproducing in silicon ideas of
humanity that are hopelessly limited, leaving out much of what we value
in ourselves.  AI, in this view, is bad science and bad news.

These critiques, while not always easy for AI researchers to hear,
could help them develop better technical practices.  They often focus
on what has been left out of AI, helping us understand at a deep level
why we have not yet achieved the AI dream of artificial creatures that
are meaningfully alive, giving us a glimpse of the steps we could take
towards fulfilling that dream, and advising us on integrating the
practice of AI responsibly with the rest of life.  Unfortunately,
however, while being eloquent additions to such fields as anthropology,
philosophy, and cultural studies, the critiques have often been
unintelligible to AI researchers themselves.  Lacking the context and
background of humanist critics, researchers often see humanist concerns
as silly or beside the point when compared to their own deep
experiential knowledge of technology.  Similarly, humanist critics have
generally lacked the background (and, often, the motivation) to phrase
their criticisms in ways that speak to the day-to-day technical
practices of AI researchers.  The result is the ghettoization of both
AI and cultural critique: technical practices continue on their own
course without the benefit of insight humanists could afford, and
humanists' concerns about AI have little effect on how AI is actually
done.

The premise of this thesis is that things can be different.  Rather
than being inherently antagonistic, AI and humanistic studies of AI in
culture can benefit greatly from each other's strengths.  Specifically,
by studying AI not only as technology but also as a cultural
phenomenon, we can find out how our notions of agents spring from and
fit into a broader cultural context.  Reciprocally, if the technology
we are currently building is rooted in culturally-based ways of
thinking, then by introducing new ways of thinking we can build new and
possibly better kinds of technology.

This insight --- that cultural studies of AI can uncover groundwork for
new technology --- forms the basis of this thesis.  In particular, I
look at methods for constructing artificial creatures that combine many
forms of complex behavior.  I analyze the technical state of the art
from a cultural-studies-of-science perspective to discover the
limitations that AI has unknowingly placed upon itself through its
current methodological presuppositions.  I use this understanding to
develop a new methodological foundation for an AI that can combine
humanistic and engineering perspectives.  Finally, I leverage these
insights in the development of agent technology, in order to generate
agents that can integrate many behaviors while maintaining intentional
coherence in their observable activity; or, colloquially speaking,
appear more *alive*.
----------------------------------------------------------------------
Date: 3 Feb 1999 03:18:57 -0800
From: Martin Rosenberg 
Subject: Re: thesis on AI & cultural theory
Phoebe:
Thanks much for your generosity.  I will be sure to download the file.
Congratulations and best wishes......mer
On Wed, 3 Feb 1999, Phoebe Sengers wrote:
> Dear SLS-ers,
>
> My thesis integrating Artificial Intelligence and cultural theory is
> now available for your downloading / ordering pleasure at
> http://www.cs.cmu.edu/~phoebe/work/thesis.html.  Note: it is huge.
> I've attached a teaser to give you a feel for what I'm doing.  Enjoy!
>
> Phoebe Sengers
> Fulbright Scholar
> Center for Art & Media Technology (ZKM)
> Karlsruhe, Germany
Martin E. Rosenberg
mrosenbe@kettering.edu
emazurmrosen@earthlink.net
Assistant Professor of Communication
Business and Industrial Management Dept.
Kettering University
1700 W. Third Ave., Flint, MI 48504-4898
810-762-7968 (O) 810-606-0044 (H)
----------------------------------------------------------------------
Date: 3 Feb 1999 04:15:44 -0800
From: Phoebe Sengers 

Subject: Re: thesis on AI & cultural theory
Generosity, my foot!  More like shameless self-promotion, I'd say :).
Thanks for the note, hope you are doing well.  Me, I am just really
happy to be done!
Phoebe
----------------------------------------------------------------------
Date: 3 Feb 1999 04:22:37 -0800
From: Phoebe Sengers 

Subject: Re: thesis on AI & cultural theory
Oops, I apologize for that completely context-free email sent to the
whole list!!
Phoebe
----------------------------------------------------------------------
Date: 3 Feb 1999 07:36:39 -0800
From: Susan Squier 
Subject: Re: thesis on AI & cultural theory
Phoebe:  Nope, I liked it, & I am very eager to read your thesis too.
Congratulations on being finished; what are your plans now??

Susan Squier

At 04:19 AM 2/3/99 -0800, you wrote:
>Oops, I apologize for that completely context-free email sent to the
>whole list!!
>
>Phoebe
Susan Squier
Brill Professor of Women's Studies and English
2 Burrowes Building
Penn State University
814-863-9582
Home phone: 814-466-7626
sxs62@psu.edu