i work with advanced technology for pure scientific research. i perform no tests on humans and, other than collaboratively contributing a wee bit to our universal knowledge base, what i do right now doesn't affect anyone's day-to-day life. i wonder what scientists and engineers think about when developing robotic technology that is used to alter and save lives... or when they create robots that mimic our skills but lack our consciousness.

is the development of robotic technology inevitable? are there kinds of robots that shouldn't be created (because they would take away jobs? because they could cause harm to humans?)? can conscious robots exist? if so, what makes a human a human and a robot a robot?

consciousness and free will seem to me to be the phenomena most difficult to duplicate. science tells me that my body and brain are complicated collections of atoms that work together through inevitable chemical reactions over which i seemingly have no control. but... i *can* contemplate the universe (as a career!) and decide what i want to do when i grow up, whether or not to commit a crime or drink a beer, or where to go on vacation. how does that happen? to the best of my knowledge, no one has figured that out, so how could this ability possibly be implemented in a robot? and if robots develop consciousness or relative independence, who decides the proper ethical behavior toward them?
isaac asimov introduced the three laws of robotics in his 1942 short story, runaround:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
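just to make the priority ordering concrete, here's a tiny, purely hypothetical python sketch of the laws as a strict veto hierarchy (the Action fields and the permitted() function are my own invention, not anything asimov wrote): the first law can override an order, and an order can override self-preservation.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    injures_human: bool = False     # would this directly harm a person?
    allows_harm: bool = False       # would it let a person come to harm through inaction?
    ordered_by_human: bool = False  # was it commanded by a person?
    endangers_self: bool = False    # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    # first law: never injure a human or allow one to come to harm (absolute veto)
    if action.injures_human or action.allows_harm:
        return False
    # second law: obey human orders, since the first law is already satisfied
    if action.ordered_by_human:
        return True
    # third law: otherwise, protect the robot's own existence
    return not action.endangers_self

print(permitted(Action("fetch a sample", ordered_by_human=True)))                          # True
print(permitted(Action("shove a bystander", injures_human=True, ordered_by_human=True)))   # False

of course, the hard part is the one the toy booleans hide: deciding whether an action "injures a human" in the first place, which is exactly the kind of judgment i was wondering about above.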
here's a video about the rovers from discovery news:
you can see the entire NASA-created animation of the rover, from takeoff to landing on mars, here.