Passing thoughts and miscellany
8 Oct 2011

First of all, it occurs to me that people who expect the Singularity to occur simply do not get the logistic growth curve. I’ll just throw that out there.

Second, the Great Migration Back to the Homeland (i.e., my move back to Melbourne) happens this week, so I will probably not post anything interesting in the next little while (shut up in the peanut gallery!).

Third, the new Carnival of Evolution is out! I get a mention (because I told them to put me in).

Fourth, Greg Laden is proliferating. In addition to his Scienceblogs site, he also has one at Freethought Blogs called the X Blog (it is occasionally X rated, but that’s not why it’s called that). He wants you to go read a couple of stories here and here and donate money to Minnesotan education.

Finally, I have been asked to contribute a chapter to a forthcoming book, Philosophy and the Rise of the Planet of the Apes. My chapter will be called Rise of the Moralists. I bet you can’t think what I’ll be discussing. No, really…
The singularity may well never happen, but the existence of a curve that asymptotes isn’t of any relevance unless you can also cite the causative conditions that correspond to your point labelled ‘carrying capacity’.
There are several. One is the speed of light: it means that any physically realisable system is going to have limits to its information-carrying capacities. Another is the technological limitation of getting something closely approximating a digital system at quantum scales (qubits notwithstanding): at some point quantum tunneling and noise make it impossible. A third is the amount of physical resources needed to produce such computers – each increase in speed involves exponentially more resources compared to the last such increase. A fourth is time: we have a limited amount of it to create and maintain such devices. A fifth is food: we need to produce enough of it while we work on these things in order to get the necessary division of labor in our society. All these things put some upper bound on how fast and capacious our computers can get. Now this is not necessarily a fixed upper boundary, but the rate of exponential growth will eventually slow down, and I reckon it is already doing so. The Singularity, even if it were a theoretical possibility, which is arguable, is not physically likely to occur until we can harness the resources of entire planets.
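To make the carrying-capacity point concrete, here is a minimal sketch in Python (the parameters r, K, and N0 are arbitrary illustrative choices, not anything from this discussion) of how a logistic curve tracks an exponential early on and then flattens out:

```python
import math

# A minimal sketch, with arbitrary illustrative parameters, of why logistic
# growth looks exponential early on and then flattens at the carrying capacity K.
def logistic(t, r=0.5, k=1000.0, n0=1.0):
    """Closed-form solution of dN/dt = r * N * (1 - N / K) with N(0) = n0."""
    return k / (1 + ((k - n0) / n0) * math.exp(-r * t))

def exponential(t, r=0.5, n0=1.0):
    """Unconstrained growth: dN/dt = r * N."""
    return n0 * math.exp(r * t)

for t in (0, 5, 10, 15, 20, 30):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):7.1f}")
```

Up to about t = 10 the two curves are hard to tell apart; by t = 30 the exponential overshoots the logistic by a factor of a few thousand. That early indistinguishability is precisely why extrapolating from the steep part of the curve is risky.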
I understand the singularity as exponential growth in artificial intelligence to a level far exceeding our own. It’s not clear that the limits you pose are sufficient to prevent it. Lightspeed surely doesn’t – nerve impulses travel much slower than that. Arguments about time, food and resources become irrelevant once the AI is clever enough to revolutionise our technology and take over most tasks from us.
Most of your limits are good points, but are we anywhere near them? It’s hard (at least for me :)) to measure technological progress; no single figure seems satisfactory. My impression, opinion really, is that for over 100 years technological progress has been predicted to slow down, but have we really seen any slowing yet?
“have we really seen any slowing yet?”

The example that affects me the most is the amount of electrical power people are willing to devote to supercomputing. The US Department of Energy is pushing for a machine 1000x faster than what we currently have but is only willing to double the amount of power required (20 megawatts is the design estimate). When you look at the power consumed by the top 500 machines over time, the total power is flatlining — machines are getting faster, but that’s because people are now buying slower, more efficient processors (and a lot more of them) rather than buying lots of the fastest available processors and letting the institution worry about the electric bill. Processor and memory speeds have pretty much topped out as well (with GPUs being an interesting wrinkle). I think core counts will top out soon as well — it’s all well and good to have a few thousand cores on a chip, but it’s just not that useful if your DRAM-to-core ratio is 10KB-ish.
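A quick back-of-the-envelope check of that design target (note the 10 MW baseline is my assumption, inferred from the comment that 20 MW is double the current budget):

```python
# Back-of-envelope arithmetic for the DOE target described above.
# Assumed figures: 1000x speedup within 2x the power budget.
# The 10 MW baseline is inferred, not stated in the comment.
baseline_power_mw = 10
target_power_mw = 20        # design estimate quoted above
speedup = 1000

efficiency_gain = speedup / (target_power_mw / baseline_power_mw)
print(f"Required improvement in FLOPS per watt: {efficiency_gain:.0f}x")  # 500x
```

A 500x jump in energy efficiency for a 1000x jump in speed is exactly the kind of constraint that shows up nowhere on a naive extrapolation of raw performance.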
Hi. Like Acleron, I don’t understand how the logistic growth curve is relevant to the “singularity” predictions. Those predictions are based on expectations of scientific and technological advances. Is there any reason to think that we are approaching saturation for these advances? (IIRC, Daniel Dennett made an argument that such advances can never be saturated.)
Artificial intelligence is parallel computation, and parallel computation is less than linearly efficient. Degeneracy can bring about multiplicative increases in efficiency.
I don’t know whether technological progress is best modeled by logistic curves; but in my experience, it’s often characterized by serendipity. One usually thinks of serendipity as something positive; but in the original sense of the word, it referred to coming up with something useful but fundamentally disappointing while searching for the real goal. It’s about consolation prizes. Coronado and his men struggle across a thousand miles of desolate wasteland in search of El Dorado and discover…Kansas. Or, to cite an example more relevant to technology, a chemist invents chewing gum in an effort to turn chicle into usable rubber. More generally, researchers can and will discover all sorts of things in the quest for the singularity, but the singularity probably isn’t one of them. Cheer up. The elixir of eternal life probably isn’t available; but even if chemistry isn’t exactly alchemy, it’s better than a lousy tee shirt. In any long-range endeavor, the odds of getting what you want are very small. The trick to keeping up morale is to retroactively represent what you do find as what you were looking for all along.
John S. Wilkins: “We crossed in the comment stream: see above.” Perhaps the crossover to ‘singularity’ occurred much before that? Maybe the logistic growth curve is incapable of providing a causal explanation.
No equation is explanatory without an interpretation in the real world, but the logistic curve applies to most real world growth one way or the other.
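For concreteness, the curve in question, written in the standard textbook notation with an intrinsic growth rate $r$ and a carrying capacity $K$ (nothing here is specific to this thread), is

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right), \qquad N(t) = \frac{K}{1 + \frac{K - N_0}{N_0}\,e^{-rt}},$$

which is nearly exponential while $N \ll K$ and saturates as $N$ approaches $K$.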
Nice rephrase. It aligns consideration along the axis of time and leaves open the possibility of further inclusions, both external and internal.
Take a look at a (log-scaled) graph of human population for the last ten thousand years. The Singularity happened in the 19th Century; we called it the Industrial Revolution 🙂
There are a variety of different notions of what people mean by a singularity. See e.g. http://yudkowsky.net/singularity/schools The primary “singularity” notion which you are talking about is the Kurzweil-style singularity. I don’t think that pointing to logistic growth curves is that persuasive in this sort of context. The total size of the curve matters a lot more. If the curve doesn’t start flattening out for a fair bit, then it could be effectively exponential for long enough to matter. So what really matters is where exactly the limits are. In that context, I think your list of restraints at 1:22 is a much better set of arguments against this sort of singularity. The other type of singularity worth talking about is the I. J. Good-style singularity, in which an artificial intelligence quickly bootstraps to the point where its capabilities (especially intelligence) far outstrip those of humanity. I consider this sort of singularity to be more plausible, but still extremely unlikely. In addition to the same restrictions you mention in your comment, this sort of thing also runs into other problems. In particular, there’s a limit to how much you can advance software simply by self-modification. Many of these restrictions stem from theoretical computer science. For example, if P != NP in a strong sense, then there are substantial barriers to how much an AI can reprogram itself in an actively useful fashion. This puts other limits on it, since many things that an AI would want to do to make itself more efficient quickly run into NP-hard problems. (For example, memory management touches on graph coloring, which is an NP-complete problem. Similarly, versions of the traveling salesman problem show up in circuit design.)
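To illustrate the graph-coloring point: finding a coloring that uses the fewest colors is NP-complete, so in practice compilers fall back on fast greedy heuristics that sacrifice optimality. A minimal sketch (the interference graph below is a made-up toy example, not from any real compiler):

```python
# A minimal sketch of greedy graph coloring, the problem linked above to
# memory/register allocation. Finding a coloring with the *fewest* colors
# is NP-complete; this greedy heuristic is fast but not optimal, which is
# exactly the gap a self-improving AI would keep running into.
def greedy_coloring(adjacency):
    """adjacency: dict mapping each vertex to a set of neighboring vertices."""
    colors = {}
    for vertex in adjacency:
        # Collect colors already taken by colored neighbors, then pick the
        # smallest color index not in use among them.
        used = {colors[n] for n in adjacency[vertex] if n in colors}
        colors[vertex] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

# Hypothetical interference graph: variables live at the same time conflict
# and must get different registers (colors).
conflicts = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(greedy_coloring(conflicts))  # e.g. {'a': 0, 'b': 1, 'c': 2, 'd': 1}
```

The heuristic runs quickly but can use more colors (registers) than strictly necessary; closing that gap in general is exactly where the NP-hardness bites.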
The Kurzweil kind is what I had in mind, but that link to Yudkowsky’s post is instructive. I do, however, think that as a matter of fact (i.e., not math), the “carrying capacity” limit is well below anything needed for his kind of Singularity as well. I do not deny that an AI will be possible (it is not a qualitative threshold, IMO, but, like everything else, a matter of quantitative differences in capacities), nor that such AIs may be smarter than us on many measures (they already are on some metrics), but the kind of differences that Singularists of all stripes seem to want/expect/fear do not seem to me to be physically realisable, quite apart from the NP-hardness issue. What we will get are physical systems that do some things much better than we do and some things much worse. Intelligence is an ecological network, and like all such networks there have to be tradeoffs. Incidentally, I love Yudkowsky’s work, so this is a rare disagreement.
John S. Wilkins: “Intelligence is an ecological network, and like all such networks there have to be tradeoffs.” For its first version, philosophical inquiry discounted and excluded the ‘subject’. Paring away that which is unreliable and vaguely indefinite is tantamount to embracing the local ‘atom’ by discarding distributed ‘potential’. Having set out by excluding subjective experience outright, philosophy gets trapped[*] in its efforts to repair and recover the excess loss brought on by the unilateral, preemptive, premature ‘rough cut’. [*] Cesarean paradox (nosology as applied to philosophy)