Fundamental first question -- for a machine to be sentient, it must be constructed in a manner that allows it to self-program, correct?
Thinking about this: if the "sentience" is really nothing more than an incredibly complex set of fixed responses, computed very rapidly, then it's still just a finite state machine, correct? What appears to be "emotion" is really a programmed response.
But only by constructing the machine to "learn" -- to develop its own "neural pathways," to make mistakes, to accidentally kill itself perhaps -- would it ever really be "sentient."
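To make the distinction concrete, here's a toy sketch in Python -- purely illustrative, everything in it is invented for the example. The first machine's responses are frozen into a table when it's built; the second carries state that its own trial-and-error rewrites, which is the crude beginning of what I mean by "learning."

import random

# Fixed-response machine: every input maps to a canned output.
# No matter how big the table gets, the behavior never changes.
CANNED_RESPONSES = {
    "hello": "Hi there!",
    "how are you?": "I'm fine.",  # looks like emotion, but it's a lookup
}

def fixed_machine(prompt):
    return CANNED_RESPONSES.get(prompt, "I don't understand.")

# Learning machine: its own mistakes reshape its future behavior.
# Trivially simple, but its responses are not fixed at construction time.
class Learner:
    def __init__(self):
        self.value = {"left": 0.0, "right": 0.0}  # its "neural pathways"

    def act(self):
        # explore occasionally, otherwise exploit what it has learned
        if random.random() < 0.1:
            return random.choice(["left", "right"])
        return max(self.value, key=self.value.get)

    def learn(self, action, reward, rate=0.1):
        # low reward (a "mistake") pulls that action's value down
        self.value[action] += rate * (reward - self.value[action])

if __name__ == "__main__":
    agent = Learner()
    for _ in range(200):
        a = agent.act()
        reward = 1.0 if a == "right" else 0.0  # the world rewards "right"
        agent.learn(a, reward)
    print(agent.value)  # "right" ends up valued far above "left"

The fixed machine will give the same answers a thousand years from now; the learner's behavior depends on what has happened to it. Neither is sentient, obviously, but only the second even has the ingredients.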
Am I just crazy?
Friday, September 02, 2005