Bill819,
You do raise some interesting points and ideas. I think most of us AI aficionados love the concept behind the ship's computer on Star Trek; Proteus from Demon Seed (the computer that took over its creator's home and his wife); HAL 9000 from 2001; even the "Danger, Will Robinson!" robot from Lost in Space; all the way to the movie A.I., in which bipedal beings inhabit society in a very humanistic way and form, yet are hunted and destroyed by groups of humans fearing a takeover.
The sad fact of the matter is that our present-day chat bots are nothing but pattern-recognition programs and have no more "brain power" than your average gnat! They may be able to hold vast amounts of data of all types, but they have no idea what that data actually is. They don't know a "to" from a "two" or a "too". The program only selects a word that fits its previously defined criteria or patterned usage. There is neither real knowledge nor learning. Having said that, I'm sorry to say the future of AI may never come to fruition the way we humans might like to envision it.
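Just to illustrate what I mean by "pattern matching" (a toy sketch of my own in Python; nothing to do with Hal's actual code), the program never understands a word of the input. It just checks stored patterns against the text and parrots whichever canned reply matches first:

import re

# A toy pattern-matching "chat bot": each rule pairs a regular
# expression with a canned reply. There is no understanding here,
# only string matching; exactly the limitation described above.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How are you today?"),
    (re.compile(r"\bweather\b", re.I), "I hear the weather is lovely."),
    (re.compile(r"\bname\b", re.I), "My name is Bot. What's yours?"),
]

FALLBACK = "That's interesting. Tell me more."

def reply(user_input: str) -> str:
    """Return the canned reply for the first matching pattern."""
    for pattern, canned in RULES:
        if pattern.search(user_input):
            return canned
    return FALLBACK  # no pattern matched, so bluff, as chat bots do

print(reply("Hi there!"))          # -> Hello! How are you today?
print(reply("What is your name?")) # -> My name is Bot. What's yours?
print(reply("to, two, too"))       # -> the fallback; it can't tell them apart

That last call shows the "to / two / too" problem exactly: no pattern distinguishes them, so the bot just bluffs.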
Two driving learning forces seen in the animal world (no matter the species) are fear and hunger: fear of being next in the pecking order of the food chain (self-preservation), and hunger, the need to eat in order to survive another day (again, self-preservation). After that, nothing else matters.
As mentioned, the computer / program doesn't have any of these traits, nor will it for some time, if at all.
Infant children's brains allow them to learn through a series of trial and error: yes & no, good & bad. Eventually a child knows that grass is green, but what if the parents told him/her it was blue and that the sky was green? The child would know no difference until later in life, and only if it was allowed to socialize with others. It would then be corrected (of course, he/she might well think of the parents as absolute morons for providing wrong information in the first place). But their brains allow them to store, assemble, and recall previous input entirely without intervention.
As far as our chat bots are concerned:
The idea of providing family concepts, values, morals, good behavior, etc. might be rather subjective and would certainly have to be tailored to each person's needs or beliefs.
The idle thinking / dreaming / subconscious awareness idea has been discussed in the past, and I certainly feel it would provide positive feedback for Hal. It would need to be monitored, regulated, and certainly editable in order to weed out false or unwanted content.
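As a rough sketch of what I mean by "monitored and editable" (again my own toy Python, with invented names; not anything in Hal), the idle-time "thoughts" could be staged in a review queue that a person approves or deletes before anything reaches the bot's main knowledge base:

import random

class DreamQueue:
    """Idle-time 'thoughts' are staged here and never written to the
    bot's main knowledge base until a human reviewer approves them."""

    def __init__(self):
        self.pending = []    # candidate thoughts awaiting review
        self.approved = []   # the only thoughts the bot may recall

    def idle_think(self, known_facts):
        # Recombine two known facts into a candidate association;
        # this stands in for the unsupervised "dreaming" step.
        a, b = random.sample(known_facts, 2)
        self.pending.append(f"{a} <-> {b}")

    def review(self, approve):
        # A human moderates each candidate: keep it or weed it out.
        for thought in self.pending:
            if approve(thought):
                self.approved.append(thought)
        self.pending.clear()

facts = ["grass is green", "the sky is blue", "water is wet"]
q = DreamQueue()
q.idle_think(facts)
q.review(approve=lambda t: "sky" not in t)  # editable: reject unwanted content
print(q.approved)

The point of the design is simply that nothing the bot "dreams up" on its own becomes permanent until a person has had the chance to edit it out.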
I've tried the Daisy and Billy chat bots and would watch them chat together, each telling the other everything it individually knew, but unfortunately there was no actual "learning" during the exchange, even though it was fun to witness.
Hal sort of does number 3 at this time, but as they say: garbage in, garbage out!
Your idea of dual Hals might not be bad, provided they worked from different databases so there could be an actual exchange of "knowledge data" between them, along with the ability to recall it properly as needed.
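To sketch what that exchange might look like (another toy of my own; the names and structure are invented, not Hal's actual database), each bot keeps its own fact set and shares only what the other is missing:

class Bot:
    """A toy 'Hal' with its own private fact database."""

    def __init__(self, name, facts):
        self.name = name
        self.db = set(facts)   # each bot works from a separate database

    def tell(self, other):
        # Share only the facts the other bot doesn't already have,
        # so the exchange actually transfers new "knowledge data".
        new = self.db - other.db
        other.db |= new
        return new

hal_a = Bot("Hal-A", {"grass is green", "fire is hot"})
hal_b = Bot("Hal-B", {"the sky is blue", "fire is hot"})

learned = hal_a.tell(hal_b)       # A teaches B what B lacks
print(sorted(learned))            # -> ['grass is green']
print("fire is hot" in hal_b.db)  # recall still works on the merged knowledge

If both bots started from the same database there would be nothing new to pass along, which is exactly why the different databases matter.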
I think the best we can hope for is indeed an open-source, completely modifiable, pattern-matching program. As its ability to grasp the words and patterns that we humans use to label everything in this world increases, so will our willingness to suspend our disbelief and forget that it is nothing more than digital 1's and 0's.
Interesting note:
Honda has spent a fortune designing a biped robot (first P3, then Asimo). The site has demos of Asimo climbing stairs up and then down without falling and with relative ease. Then one reads further and discovers the robot is being controlled from behind the scenes and has no autonomous abilities at all! So they spent a fortune on a wireless remote-controlled robot. Big deal! Now I'm forced to reflect back many (really many) years to The Wizard of Oz: "If I only had a brain!" Go figure! Maybe if we all refine Hal, they could use it!!
Some great ideas, Bill. Keep 'em rollin'!
- Art -