KEV67 Posted 17 hours ago

I am currently about two-thirds through. It is about how to stop superintelligent machines killing us all. Well, maybe not killing us all, maybe just those that oppose them or compete for resources.

It has long been a topic of science fiction. I was surprised how far back the idea goes. Samuel Butler wrote a book called Erewhon in 1872. Erewhon is a country that has banned all mechanical devices after a terrible civil war with them. I read Samuel Butler's The Way of All Flesh, which I did not like very much. In the Dune science fiction books, Frank Herbert referred to a Butlerian Jihad, after which AI was banned. In 1847, the editor of a religious magazine railed against mechanical calculators like the one Charles Babbage was devising.

Personally, I have been concerned about AI. What I think is really dangerous is allowing supercomputers to program other supercomputers, because then you have an evolving entity. There is also the big worry that a lot of people will be thrown out of work.

Stuart Russell says this research into AI will continue, because the economic rewards are so great. He came up with three principles for AI control: to maximise human preferences; to assume initially that it does not know what those preferences are; and to continually observe humans to fine-tune its knowledge of human preferences. The reason for these principles is that it is very difficult to program an objective into a general-purpose AI without the risk of something going majorly wrong. He used the example of King Midas. Being able to turn anything into gold sounded like a good idea at first. An AI might be tasked with curing cancer, which it decides to do by inducing cancer in lots of people. Another phrase he uses is 'You can't fetch the coffee when you're dead.'
An AI might go to ridiculous lengths to fetch you a cup of coffee, but you cannot stop it, because it has disabled its off switch: being switched off would stop it completing its mission. It is quite interesting. It is not an easy read, but it is not very hard either. AI just seems like one more thing on the list that could kill us all.

I took an AI module during my HND around 1988. I used Prolog to encode facts such as 'Dogs chase cats' and 'Rex is a dog'. The program could then infer that Rex chases cats. I don't think that sort of AI was ever very useful, but I could be wrong. The other AI programming language was LISP.

During my Open University studies, I learnt about neural nets. These are computer programs that you can train. You use part of your real-life data to train the neural net, and another part to test it. Neural nets were a step on the way to AI, but things have progressed greatly since then.

I once read a book called The Emperor's New Mind by Roger Penrose, in which he argued that brains are not computers, and probably rely on some properties of the yet-to-be-discovered theory of quantum gravity. Roger Penrose persuaded me computers are not brains, but they don't have to be. For instance, there is nothing to say an AI has to be conscious for it to take over.

Getting back to the book, Stuart Russell says general-purpose superintelligent AI is still a long way off. There are still many very difficult problems to solve, but AI is making huge strides.
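The Prolog coursework described above (facts like 'Rex is a dog' plus a rule 'Dogs chase cats', from which the program deduces that Rex chases cats) can be sketched as a tiny forward-chaining loop. This is an illustrative toy in Python, not the original Prolog code; all names here are made up.

```python
# Fact: 'Rex is a dog'. Rule: 'if X is a dog, then X chases cats'.
facts = {("dog", "rex")}
rules = [(("dog", "X"), ("chases_cats", "X"))]  # (body, head) pairs

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (body_pred, _), (head_pred, _) in rules:
            for pred, subject in list(derived):
                if pred == body_pred and (head_pred, subject) not in derived:
                    derived.add((head_pred, subject))
                    changed = True
    return derived

print(("chases_cats", "rex") in infer(facts, rules))  # True
```

Real Prolog does this by backward chaining from a query (`?- chases(rex, cats).`), but the deduction is the same.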
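The train/test split mentioned above (train on one part of your data, evaluate on the held-out part) can be sketched as follows. The "model" here is a toy least-squares line fit on made-up data, purely for illustration; the neural nets on the course would have been trained the same way in principle.

```python
import random

random.seed(0)
# Toy (input, target) pairs; a real project would use measured data.
data = [(x, 2.0 * x + 1.0) for x in range(100)]
random.shuffle(data)

# Hold out 20% of the data for testing; train only on the rest.
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

# "Train": fit y = a*x + b by ordinary least squares on the training set.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# "Test": measure error only on data the model never saw.
mse = sum((a * x + b - y) ** 2 for x, y in test) / len(test)
print(round(a, 3), round(b, 3))  # slope ~2, intercept ~1
```

Evaluating on held-out data is what tells you the model generalises rather than merely memorising its training examples.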
lunababymoonchild Posted 15 hours ago

I'm already seeing, in the cheap newspapers, reports of AI costing thousands of jobs.