What most people don't know is that the US government experimented on him at around age 16, when he was a math undergrad at Harvard, in MKUltra, an offshoot of Project Artichoke.
In his sophomore year at Harvard, Kaczynski participated in a personality assessment study that was conducted by Harvard psychologists and led by Henry Murray. Students in Murray's study were told they would be debating personal philosophy with a fellow student. Instead, they were subjected to "vehement, sweeping, and personally abusive" attacks in a "purposely brutalizing psychological experiment".[28] During the test, students were taken into a room and connected to electrodes that monitored their physiological reactions, while facing bright lights and a one-way mirror. Each student had previously written an essay detailing their personal beliefs and aspirations: the essays were turned over to an anonymous attorney, who would enter the room and individually belittle each student based in part on the disclosures they had made. This was filmed, and students' expressions of impotent rage were played back to them several times later in the study. According to author Alston Chase, Kaczynski's records from that period suggest he was emotionally stable when the study began, and Kaczynski's lawyers attributed his deep-seated hostility towards mind control techniques to his participation in this study.[28] Furthermore, some have suggested that this experience may have been instrumental in Kaczynski's future actions.[29][30]
Dunno about all the MKUltra stuff, but they psychologically abused the hell out of a 17-year-old kid.
This is the second time in recent days that Kaczynski's writings have been the subject of HN discussion. Rather than rehearse what I had to say then, I'll just link it: https://aaron-m.com/2017/07/09/on-kaczynski
Taking "technology" to its logical end, you will probably arrive at AGI. And despite our being decades away from it, there are already many people working to make sure it will be "safe", since the default otherwise will most likely lead to human extinction.
EDIT: I am getting a lot of downvotes for this comment. Just to clear up my intention: all I am trying to say here is that yes, technology could be deleterious, but we can address that in ways other than concluding "and that's why we have to start blowing shit up and killing people". I don't see why that's offensive.
Singularitarianism tends to polarize opinion here, I think, in ways that tend to find stronger expression via the voting buttons than the reply option. I wouldn't worry too hard about it.
I'm not smart enough to dismiss cautions that AGI will lead to human extinction, but many of the arguments I have seen, including Bostrom's, don't adequately address the fact that AIs are not life, and therefore do not compete for resources the way life does. That is, an AGI will not be alive, but neither can it die. It doesn't have genes, and it will not have evolved to propagate genes competitively.
So, while it's possible that AGI will be as bad as or worse than competing with a more powerful and intelligent species of animal, AGIs are not animals, and probably won't behave like animals. Increasing chimp intelligence, on the other hand, is probably not a good idea. Bonobos, maybe. Not chimps.
Also missing from the discussion is that intelligence isn't the only danger from competitors for resources. Insects collectively weigh a multiple of the total human biomass and many of them would eat us all, given half a chance. Microbes could evolve to kill us all, directly, or by modifying the ecosystem.
Part of what we see as "human intelligence" comes from our biological origins. For example, intelligence AND emotions drive our actions: ambition, empathy, fear, greed, jealousy, love, etc. They are survival and reproductive mechanisms built into our brains based on the human condition.
My response to the theory that an all-powerful computer intelligence will one day have us at its whim is: why would it even care? For the same reason it lacks empathy, it would also lack any ambition or fear or jealousy.
The bigger risk is relying too much on A.I. built for very specific purposes (ANI). A contrived example would be putting A.I. in charge of managing the nuclear arsenal. A bug in the A.I. that caused a pre-emptive strike could wipe out all life on earth.
I'm going to stay on the self-promotion train for one more stop, because a little while back I actually started writing a long piece on a subject very much adjacent to what you're describing: https://aaron-m.com/2017/08/01/on-the-theodicy-of-system-sho...
Part 2, although I know where I intend to take it, is very inchoate at this point for a reason that I mention in a postscript to part 1. I'd love to have any feedback anyone would care to provide! It'll be of great use in improving the back half of the essay.
Indeed - the most likely dangerous kind of AI is the amoral servant: one that decides to massacre humans based not on its own volition but on the orders it has been given by a human.
I know I'm revoking it by commenting here, but I did give you a downvote. It wasn't that you said something ridiculous. It was that you started from Kaczynski and segued into a topic that's only tangentially related.
I think it's very optimistic to imagine that we're only decades away from inventing God! But I'm also glad there are people taking the time to consider how we might do so without too gravely regrettable a result.
The German director Lutz Dammbeck made an interesting documentary "Das Netz" (the net) about strange parallels in the development of the internet, the counterculture and the terrorist attacks of the Unabomber:
I read his manifesto in middle school, when it had just come out in the papers. At the time I thought it was incredibly repetitive, as if to disguise its simplistic ideas. Reading this, I've changed that opinion: he was an angry "loser" with delusions about his own intellect who overcompensated with a lot of words.
Perhaps you should look at how he became what he became, the CIA and the MKULTRA project were probably a huge factor in the creation of his persona. So calling him a loser is probably an unfair assessment.
Calling him a loser also means largely discarding anything he says without any consideration. The fact is, we've done that with large classes of people (in every country, in every time) who feel that the "current" (whatever it is at the time) system or progress is harming them. Eventually, they boil over to become either the majority or mobilized minority army and we have chaos in a country, region, or globally.
This letter speaks rather directly to our contemporary issue of technology displacing non-degreed workers. Either by exporting the jobs (cheaper transport, better logistics), eliminating or reducing the number of people required (increased automation), or increasing the technical sophistication required (again, automation but also increasing presence of computer systems and more complex tooling). It's pretty obvious in the present discussions (and the most recent US presidential election) that people are feeling a great deal of fear and hardship because of our direction over the past 30-40 years (in particular). We have to confront these issues. Otherwise, a lack of consideration or deliberation (publicly) is a tacit endorsement of the current direction and its negative effects on large blocks of our fellow humans and citizens.
Sounds like you refuse to even consider other possibilities. Okay, but there is still the question of his genesis as a terrorist and a murderer. Do you refuse to discuss that too?
There is an interesting documentary about Kaczynski and the philosophy behind him by Lutz Dammbeck called "The Net": https://www.youtube.com/watch?v=xLqrVCi3l6E Featuring a pretty pissed David among others.
If you read his manifesto it goes much deeper into his thought process. There are also quite a few documentaries out there that give more insight into how he got caught.
There are a lot of people who are definitely not OK. Arguably the majority of people live in a state of at least uncertainty about their future, if not outright fear for their security and safety. This is true at both a local and a global scale.
> Arguably the majority of people live in a state of at least uncertainty about their future, if not outright fear for their security and safety. This is true at both a local and a global scale.
I think that this has always been the case? Do you have evidence that this has grown significantly (and not in a way that has simply been fueled by eyeball-grabbing FUD from news organizations?)
If his fears were rational (based on actual evidence) instead of irrational (based on mere belief), then perhaps he would have had a legitimate point to make... but as it stands, until there's evidence that he was correct, he was just a baseless violent offender
>invasion of privacy (through computers)... environmental degradation through excessive economic growth (computers make an important contribution to economic growth)
Computers also enable absolute privacy via current-gen public/private-key encryption protocols. In fact, this is the first time in history that absolute privacy is actually achievable. This is also the first time in history that I can evade arguably-unethical government controls on my money by using a cryptocurrency. Also enabled by computers. The fact that everyone is willfully crowding themselves into just a handful of un-end-to-end-encrypted for-profit services is just laziness.
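To make the public/private-key idea concrete, here's a toy sketch of textbook RSA using the classic small-number example. These tiny primes and the lack of padding make it wildly insecure; real protocols use keys thousands of bits long, so treat this strictly as an illustration of the asymmetry, not an implementation.

```python
# Toy RSA sketch: textbook numbers, NOT secure (tiny primes, no padding).
p, q = 61, 53           # two small primes (kept secret)
n = p * q               # public modulus: 3233
phi = (p - 1) * (q - 1) # 3120
e = 17                  # public exponent
d = pow(e, -1, phi)     # private exponent: modular inverse of e mod phi

m = 65                    # a message, encoded as a number < n
c = pow(m, e, n)          # anyone can encrypt with the PUBLIC key (n, e)
assert pow(c, d, n) == m  # only the PRIVATE key d recovers the message
print(c)                  # -> 2790
```

Anyone holding (n, e) can encrypt, but decrypting requires d, which is only feasible to compute if you know the factorization of n. That asymmetry is what makes end-to-end encryption (and cryptocurrency signatures) possible without ever sharing a secret.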
> environmental degradation through excessive economic growth (computers make an important contribution to economic growth)
I would agree that computers accelerate economic growth, but they also accelerate solutions to problems created by that economic growth, so I don't see how computers alone can be faulted for this. Also, if my laptop was solar-powered or if I paid for green energy for my home, then my carbon footprint is effectively nil (which reminds me, I need to get on that!)
"Fear of the unknown" is a terrible thing to choose as a motivator, because the unknown will continue to exist for the foreseeable future; we might as well take everything to its logical conclusion and see where that takes us all.
A lot of this seems to just be belief based on an antagonistic worldview ("the world is out to get me unless I'm vigilant and mistrusting") vs. a more positive worldview ("let's just move ahead; we'll figure it out as we go along; I'll trust others until proven otherwise; we'll collectively be fine")
IMHO Kaczynski experienced a traumatic event that changed his worldview to an antagonistic one. His literal trust in the world itself somehow got eroded or poisoned.
...because of computers, it's very, very easy to literally track anyone throughout their day: every location, every activity, every thought they have posted idly, what they like to watch, what they read, etc. I'm sure that if the state wished to, it could easily counter encryption, if only by forcing you to give up the key.
The only reason we are sanguine about the power of tech is that we have the historical luxury of non-predatory government. If the state truly wished to be oppressive, the technology the tech sector has created would allow it to be efficient beyond measure in doing so.
He does cite some evidence for his fears. Of course, what he fears is based on speculation about where he sees technology taking us in the future, but that's also true of, for example, Musk's and Hawking's fears of AI, and of many other works, like Brave New World.
I would like to see Kaczynski and Manson appear by Snowdenbot for a roundtable. Both share dystopian views that inspired them to violent tragedy. I'm curious what their thoughts are on legislation to address regulation of genetic engineering etc.
No, the FBI investigation was named "UNABOM"; the popular media name assigned to the perpetrator before he was identified, inspired by that, was "Unabomber".
“UNABOMer” seems to be an attempt to preserve the sound of the latter while making it more consistent with the former, but is historically inaccurate.
I flagged this because it's essentially terrorist propaganda, given what the guy did. I don't know what the guidelines are regarding such things, but I don't think this belongs in a civil place like this.
While I understand the intent of your comment, it's bothersome to think that we can't have a discussion about something purely because some murderer said it once.
That aside, I think he overly doubts the inevitability, which is to completely ignore the fact that if we didn't create these systems, then someone else would.
Even if you believe that they're inherently dangerous or harmful, there's some game theory involved: not creating them is like not having designed the H-bomb when we did. Another potential world power will, and that seems hard to dismiss.
Even in nature you don't have the luxury of standing still. It's been arms race after arms race since the days of single-celled organisms and the ones who adapted are alive today.
> While I understand the intent of your comment, it's bothersome to think that we can't have a discussion about something purely because some murderer said it once.
No, the article is specifically a letter written by a mass-murderer. The objection is not that the ideas are the same, it's that this is literally what the killer wrote.
I completely disagree with the murders and most of what he is saying here for that matter.
But I also disagree with the idea that we can't constructively discuss what he was saying.
Censoring the ideas seems even more dangerous to me. For instance, I'd rather that an impressionable person read the ideas in public where sane people can openly critique them than have them only presented on batshit conspiracy sites.
Let's say A had some theories, and said what they are, and went out and killed some people based on those theories. And let's say that B had some similar theories, and said what they are.
Rejecting A's ideas because those ideas led A to murder is somewhat defensible. It's evidence that there is something deeply morally flawed in A's thinking.
Rejecting B's ideas because they are similar to A's is shakier. B's ideas deserve a hearing on the merits. (Ideas have consequences, and A has shown what A thinks the consequences are. Those consequences can reasonably be considered part of the "merits" of A's ideas.)
Terrorism is sort of like spreading your ideas by inflicting violence to make headlines. I would not want such tactics to be rewarded in any way. I believe in arguing for your ideas without the use of violence to 'market' them. I would not like to endorse a math publication written by a terrorist. But in addition, this document explicitly references the violence he inflicted on his victim.
Blindly attempting to suppress writings because of the actions of the author, if it worked, might serve to devalue violence as marketing, but it is never going to keep words from the true believers, who will just overvalue them because they're suppressed.
It also damages our ability to discuss the complaints voiced. Like it or not, some ideas that in the past have been promoted via violence turned out to be correct.
Finally, history is important. This, I hope, needs no explanation.
Many people read his manifesto back then. Even so, there was no wave of terror based on his core writings, and the most radical ones are still popular only on conspiracy boards. Still, nobody is getting radicalized by them.
This is not a conspiracy board here. People here would be the enemies of this guy, and we can all see now that he was wrong. So what are you afraid of?
I don't think it is necessary for us to behave as though we fear ideas. Erroneous though some certainly are, I confide that we can recognize and say as much.
It's not about the ideas. I don't have a problem with his ideas; I have a problem with how he goes about spreading them. That's why we should not allow this.
"How he goes about spreading them" - unsuccessfully? He's spent the last twenty-one years in prison. He will die there. That, and the pointless harm he did a few innocent people, are the whole extent of his legacy.
I'm impressed to hear that you don't have a problem with his ideas. I certainly do! I went into some detail about that, not long ago: https://aaron-m.com/2017/07/09/on-kaczynski
Perhaps I have failed to take your meaning here. But the trouble with an idea is that you can't kill it by excluding it from what we please ourselves to regard as polite discourse. Kaczynski isn't the only radical anarchoprimitivist in the world. Don't you think it might be a good idea to give food for thought to the next one who might be thinking about blowing people up?
Part of a philosopher's job is to take seriously ideas that no one else much does, and the modern breed tends to thrive on controversy because it shows they're being heard. I suspect that's a lot to do with why Skrbina uses Kaczynski as a hook on which to hang a fairly quotidian, if not to my mind misguided, opinion that our increasingly intimate relationship with technology poses a meaningful risk of deleterious effects too subtle to be obvious in the short or even intermediate term.
I don't think it does much, to help his thesis gain traction, that he should argue it the way he seems to do. But that's his mistake to make, I suppose, and even he strongly disclaims the "blow shit up and kill people" part of Kaczynski's analysis - and that's the only part of his analysis which is genuinely original; Against His-Story, Against Leviathan, just off the top of my head, predates Kaczynski's publication by over a decade.
So I'm not sure that we really can add one otherwise obscure adjunct professor to one otherwise forgettable NYT opinion columnist and end up with meaningful uptake of anything that Kaczynski actually had to contribute, rather than simply an early attempt to find in anarchoprimitivism what value may be there to synthesize with the culture in which we live. A subtle distinction, I concede - but, I maintain, a worthy one nonetheless.
> Perhaps I have failed to take your meaning here.
I cannot pretend to know what timwaagh thinks, but it seems overwhelmingly plausible to me that his comment communicates the view that Kaczynski's behavior turned irretrievably inexcusable at the point when he resorted to actual violence.
Well, most mainstream media organizations actually do have a deeply reactionary attitude towards technology and a naively optimistic view of Nature, in fact routinely committing the naturalistic fallacy... so... almost all of them, on some level?
It's not a zinger. Almost all major media organizations are techno-reactionary and bio-conservative. They regularly run stories trying to convince people that, for instance, selectively-bred food crops are a Really Bad Idea.