Old 03-14-2018, 11:18 AM   #21
Johnny1A.2
 
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by VonKatzen

Most of the technology we use today is an extrapolation of stuff from the 1890s to the 1950s. Despite the Science! community selling us a bill of goods I think it's worthwhile to think about whether this is the same botched fantasy that told us jetpacks would be practical and widespread by 1973. The growth of the 19th and 20th centuries may in fact only be so 'fantastic' because it was bottlenecked by economic and political issues for thousands of years.
There have been other periods somewhat like the 19th/20th century over the course of history. I would actually extend the period so it covers about, say, 1650 to 1970, because it also includes a burst of basic science in the early part that enabled a lot of the technological explosion of ~1800-1970.

Interestingly, those other 'high tech' periods, like the Hellenistic era, also lasted about 3 centuries, and shared some political and social and economic parallels with this last one.

I don't see any reason to assume that future surges of advancement won't happen, but predicting them is a crapshoot.
__________________
HMS Overflow-For conversations off topic here.
Old 03-14-2018, 11:50 AM   #22
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Johnny1A.2
One reason some of the other substrates that used to be talked about a lot either never took off or remained niche is that we squeezed silicon so hard. It's a little like the internal combustion engine. Almost everything about it is the better part of a century old, but we've refined and refined and refined it to the point where it's far more efficient and effective than most engineers would have considered likely in 1920. But it's not new.
There are several different points where you could say a thing starts: When the basic scientific principle is discovered; when the prototype invention is conceived; when it's built; and when it hits the mass market. For electricity, for example, you have Benjamin Franklin's theory of two types of charge and his unification of static electricity with lightning; you have Volta's battery; you have the Daniell cell, which eliminated the problem of electrodes getting covered with hydrogen gas; and then you have the use of batteries for telegraphy and arc lights on a widespread scale.

By that standard, genetic modification of living organisms is a newer technological revolution than computers; the theoretical basis was established after the first computers were built and around the same time that the transistor was invented, and prototype technologies came decades later. And nanotech is a later technology that shouldn't be discounted. Catalysts with nanoscale design are all through one of the journals I edit; they last longer and provide greater and more specific increases in reaction rate than old-style catalysts.

Then there's modeling of the actual structure of the brain and its ways of handling information. Back when I was at UCSD, it was a radical proposal for a cognitive psychology textbook to say that we could and should study how neurons handle information; the orthodoxy was behaviorism, which said that we had to limit ourselves to external behavior, because internal processes would never be observable and speculating about them wasn't scientific—a statement that now sounds as quaint as Comte's dictum that science could never investigate the chemical composition of heavenly bodies. We're only barely beginning to see hints of applications of cognitive science and neuroscience, but they're likely to have a big impact.
__________________
Bill Stoddard

I don't think we're in Oz any more.
Old 03-14-2018, 12:15 PM   #23
Johnny1A.2
 
Join Date: Feb 2007
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by whswhs
There are several different points where you could say a thing starts: When the basic scientific principle is discovered; when the prototype invention is conceived; when it's built; and when it hits the mass market.
For our purposes, the ones that matter are 'basic principle discovery' and 'first iteration of functional result'. How long it takes to mass-market something is not so much a technological issue as it is social, political, economic, etc. Video telephony has been potentially available for decades; it only recently came into common use, but that doesn't make it new.

Quote:

For electricity, for example, you have Benjamin Franklin's theory of two types of charge and his unification of static electricity with lightning; you have Volta's battery; you have the Daniell cell, which eliminated the problem of electrodes getting covered with hydrogen gas; and then you have the use of batteries for telegraphy and arc lights on a widespread scale.
Which is a good example of why I extend the time period of the recent technological explosion back further than the early 19C: it also needs to include the preliminary science that enabled it. For our purposes, electrification should probably be dated to the late 19C.

Note again with regard to mass market, though: electrification has been a serious thing since the late 19C, but many rural areas and small towns in the USA only achieved full electrification as late as the 1940s, almost half a century later, through a combination of natural spread and some government encouragement.

Quote:

By that standard, genetic modification of living organisms is a newer technological revolution than computers; the theoretical basis was established after the first computers were built and around the same time that the transistor was invented, and prototype technologies came decades later.
I would agree with that. Crick and Watson worked out the structure of DNA in 1953; that would be an example of basic new science. The applications of it are still coming, with things like CRISPR-Cas and the like.

I never said all progress has stopped. I said it's slowed down considerably since the peak. For that matter, new basic science is still coming on the subject of heredity; epigenetics and the like are relatively new science.

Quote:

And nanotech is a later technology that shouldn't be discounted.
I don't discount it. I do think that Drexler is pitching a mix of about 5% reasonable speculation, 5% wild speculation, and 90% hype and wishful thinking, especially on things like social impact.

I have no doubt that nanotech research and development is leading to interesting things, though. But the most significant results will, I suspect, be other than those most ardently hoped for by the enthusiasts.

Quote:


Then there's modeling of the actual structure of the brain and its ways of handling information. Back when I was at UCSD, it was a radical proposal for a cognitive psychology textbook to say that we could and should study how neurons handle information; the orthodoxy was behaviorism, which said that we had to limit ourselves to external behavior, because internal processes would never be observable and speculating about them wasn't scientific—a statement that now sounds as quaint as Comte's dictum that science could never investigate the chemical composition of heavenly bodies. We're only barely beginning to see hints of applications of cognitive science and neuroscience, but they're likely to have a big impact.
This is an area where a lot of basic science remains to be done, and when it gets done is unguessable because we don't even know all the right questions yet.

Note that the behaviorism/internal processes argument in psychology parallels the arguments about artificial intelligence now. There's the group that says the output is defining, the 'Turing test' crowd that maintains that if you can't tell the difference between the output of the machine and the human, you must assume the machine is conscious. Then there is the opposition.

Philosophically, the Turing Test approach falls apart when closely examined.
__________________
HMS Overflow-For conversations off topic here.
Old 03-14-2018, 12:24 PM   #24
VonKatzen
Banned
 
Join Date: Mar 2018
Re: No AI/No Supercomputers: Complexity Limits?

A major point of technical advancement is economics. No matter what you know and can do in theory, you actually need a massive supply chain to produce it. Making a modern handgun in the Middle Ages might be technically possible, but it would require so much precision, effort, time, and money that you could produce hundreds or thousands of crossbows for the same cost. Even assuming they knew how, therefore, no one in the Middle Ages would have bothered making an HK USP, because you could equip an army for the same price.

This is a big problem with a lot of Science! predictions as they currently stand. Making microscopic machines out of supercarbon is entirely possible, but it's also entirely impractical. A lot of the sci-fi community extrapolates based on technical data without adequately considering the resources, division of labor and supply chains required to make such advancements actually useful.

While it's entirely possible that some unforeseen and incredible technical advances may be made in the future (even the near future), what isn't nearly so certain is that the massive improvements in manufacturing and distribution necessary to afford them will accompany them. For decades particle physics has been well ahead of any remotely useful application (at least for the people who aren't grant-funded nerds working at CERN, and even then they've got theories that exceed their ability to actually test them).

Likewise for future tech: advances in the technical arts and economic productive capacity in some areas may not lead to existing technology being replaced at all. Handguns are a perfect example. Lasers have been used for decades, but so far no laser weapon has ever been used in battlefield conditions - not even the far less impressive 'dazzler' type of laser. And while ten or twenty years down the road a man-portable flesh-boiler may become sufficiently rugged and effective to be used, one can likewise infer that many of those same advancements could be used to produce virtually indestructible rifles with a hundred rounds of ammunition that weigh only as much as modern guns. Since the accuracy of firearms already exceeds the visual and coordination capabilities of 99% of soldiers, the technical improvements in lasers may be completely irrelevant.

The same could be said for all sorts of high-tech melee weapons. Other than home-owners with baseball bats and cops with billy clubs, basically nobody uses melee weapons, because you can learn to shoot a gun more cheaply and easily than you can learn to fight with a mace. Even if you could build vibro chainswords, there is a strong possibility that nobody will, because LAWs have the same effect at a lower cost and can be used from three miles away.

Modern ships utilize very similar overall designs to sailing ships and longships. The materials are different and so are the engines, but the basic idea is to make a boat that doesn't sink in high seas. Need to defend yourself? Well, you could build a rail gun or a laser onto it, but it seems like cannons and missiles are still the preferred option.

Another point: earlier today I was reading a post where someone said that Dyson Spheres are definitely possible. Well, they may be logically consistent mechanical designs, but they may be totally impossible from a physical point of view. It may be impossible to ever actually acquire the sheer mass and type of materials required to build one (even a Ringworld would require multiple solar systems of matter). If the curmudgeons are right, space is mostly useless because it's too inhospitable and everything is too far apart. For the resources it would require to mine asteroids or build a settlement on Mars you could build an entire city with super maglev trains that would hold thousands or millions of times as many people in a far more accessible and comfortable location. Literally everything is already on Earth, which is better than any space station that could ever be built. Thus even with super-levels of tech there may be virtually no space infrastructure outside of NEO, because it's simply not worth doing.

The point is that even a very high tech society might resemble ours in most ways. I think that, ideological and centralization differences aside, ancient Rome was a lot closer to modern societies than people might think.

Furthermore, if you had the kind of technology and resources to do things like build Dyson Spheres and travel FTL and have self-feeding nanomachines, it is extremely unlikely that anything resembling biological human beings would exist, and many of the familiar categories like 'politics' would become basically meaningless. Likewise with warfare - if you can open a hole in spacetime, you can destroy the entire solar system with one attack. In fact the sheer energy concentration required to do so would probably destroy the solar system even if that wasn't your intent.

The most plausible sci-fi that isn't completely post-human and unrecognizable is basically Cyberpunk, minus the Matrix-style computers that for some inexplicable reason have deadly positive feedback built in.

A lot of Science! and science fiction is much lighter on philosophical and economic considerations than it is on technical ones. Almost every bit of futurist speculation could be critiqued on these grounds, but most of the advocates not only have no answer but are oblivious to the question.

Finally, intelligence (in the human sense) is based on understanding and comprehension, not data. They are fundamentally different things. Computers only deal in data, and without comprehension they will never be intelligent. There may be many ways of building intelligence (or only a couple), but adding more operations per minute up to infinity will never produce understanding. Intelligence is a product of the physical relations of material objects; computers are just a wheel-gear mechanism to help us do math problems, and can be built out of literally anything, from silicon to beer cans. Computing is literally not the same thing as intelligence, and computing without intelligence may (at best) help a robot avoid walking into walls. But even that's only possible because of deliberate design by intelligent creatures.
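
(A minimal C sketch of that substrate point, with toy names: the half-adder below is defined entirely by the logical relations among its NAND gates, and nothing about it changes whether nand() is realized in transistors, beer cans, or this throwaway function.)

[code]
#include <stdio.h>

/* The single "primitive" -- swap in any physical substrate you like. */
static int nand(int a, int b) { return !(a && b); }

/* Everything below is built from nand() alone. */
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

int main(void) {
    /* A half adder: sum = XOR, carry = AND. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d + %d -> sum %d, carry %d\n",
                   a, b, xor_(a, b), and_(a, b));
    return 0;
}
[/code]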

Ants are a perfect example of a totally mindless computer machine. They can wander around and build more ants, etc, but they know nothing, and will never know anything, no matter how big the ant hill gets. They need fundamentally different brains for that.

Old 03-14-2018, 12:35 PM   #25
GodBeastX
 
 
Join Date: Dec 2008
Location: Behind You
Re: No AI/No Supercomputers: Complexity Limits?

I get what you're saying, and that might be why I liked the Aliens franchise: it felt more conservative from a sci-fi standpoint. There's really nothing you see in Aliens that can't be done with a lot of what we have today.

However, complexity limits are something that I think confuses people quite a bit when it comes to programming. Much like manufacturing and production, the person creating the end product may not, and need not, know how everything beneath it works.

To give you a high-level example:

Playing a video game involves hardware like mice and keyboards. That hardware has been encapsulated into the USB HID framework: the spec was written by one person and implemented by another in easy-to-use chips.

These chips are sold to someone who needs to make a keyboard. That keyboard is plugged into the computer, and the OS handles the input and output, to the point that the software engineer is calling functions like GetKeyboardState, which reports the pressed/unpressed state of the entire keyboard.

So that individual then writes a library that takes all that and turns it into game events that some other programmer might configure in a system like Unity.

Then the game programmer using the Unity framework just goes "I need a jump event, and when that happens I move a sprite object".
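
(A compressed C sketch of those layers, with hypothetical names and a faked bottom layer so it runs anywhere; on Windows, layer 2 would be a real OS call like GetKeyboardState.)

[code]
#include <stdio.h>
#include <stdbool.h>

/* Layer 1, chip/driver: stands in for the USB HID report.
   Faked here so the sketch is self-contained. */
static unsigned char hid_read_scancode(void) {
    return 0x2C;  /* pretend the keyboard reports "spacebar down" */
}

/* Layer 2, OS: a GetKeyboardState-style query built on the driver. */
static bool os_key_is_down(unsigned char scancode) {
    return hid_read_scancode() == scancode;
}

/* Layer 3, input library: raw keys become named game events. */
static bool input_event_fired(const char *event) {
    (void)event;                  /* a real library would consult a binding table */
    return os_key_is_down(0x2C);  /* here "jump" is hard-wired to spacebar */
}

/* Layer 4, game code: knows only "jump", nothing underneath. */
int main(void) {
    if (input_event_fired("jump"))
        printf("move the sprite up\n");
    return 0;
}
[/code]

Each layer's author only needs to know the layer directly beneath, which is the whole point.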

Look at this forum: it goes up quite a number of chains of software interaction, to the point that a web developer only has to put up an input object and post to a CGI script/database, greatly simplifying his programming effort.

So yes, there is a lot of complexity in the final software, but most programmers are only writing a fraction of it. Just as there is a lot of complexity in a car, but the guy who changed your wiper blades for you didn't need to know materials science to create a good rubber to wipe the water from your windshield.

I don't think GURPS conveys that very well with just Complexity.

So is there a limit for complexity? No. Programmers all build on each other's shoulders, and as long as everyone is doing what they do well, you'll have very few bugs. You only have to look at the difference between MS-DOS and Windows 10 to see how vast the complexity of a basic operating system can get, and how relatively pain-free it is for you to type on this forum right now.

I have been working on a framework to convey all this better in simple-to-understand terms.
__________________
RPG Jutsu.com - Ninjas Play GURPS
Old 03-14-2018, 01:09 PM   #26
RyanW
 
 
Join Date: Sep 2004
Location: Southeast NC
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by GodBeastX
However, complexity limits are something that I think confuses people quite a bit when it comes to programming. Much like manufacturing and production, the person creating the end product may not, and need not, know how everything beneath it works.
To use a different metaphor, gunpowder production was using potassium nitrate as an oxygen source centuries before oxygen was discovered. In computer terms, gunpowder manufacturers were using the saltpeter.MakeFireBurnFaster() method without knowing the programming behind it.
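
(Taking the metaphor literally for a moment, a hedged C sketch with made-up names and numbers: the caller invokes the "method" on the strength of its observed behavior, with no access to the chemistry behind it.)

[code]
#include <stdio.h>

/* The interface the gunpowder maker sees: a name and a method. */
struct Oxidizer {
    const char *name;
    double (*make_fire_burn_faster)(double base_rate);
};

/* The hidden implementation: the oxygen-release chemistry that
   wasn't understood until centuries later. The factor is made up. */
static double saltpeter_burn(double base_rate) {
    return base_rate * 3.0;
}

int main(void) {
    struct Oxidizer saltpeter = { "potassium nitrate", saltpeter_burn };
    /* Call it, trust the result, ship the gunpowder. */
    printf("burn rate: %.1f\n", saltpeter.make_fire_burn_faster(1.0));
    return 0;
}
[/code]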
__________________
RyanW
- Actually one normal sized guy in three tiny trenchcoats.
Old 03-14-2018, 01:21 PM   #27
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by Johnny1A.2
I would agree with that. Crick and Watson worked out the structure of DNA in 1953; that would be an example of basic new science. The applications of it are still coming, with things like CRISPR-Cas and the like.

I never said all progress has stopped. I said it's slowed down considerably since the peak. For that matter, new basic science is still coming on the subject of heredity; epigenetics and the like are relatively new science.
I don't think "slowed down" is even necessarily the case. Yes, there was progress in fundamental physics in Newton's time, and then in Faraday's and Maxwell's, and then in the first half of the twentieth century. But in Newton's time there was very little progress in biology or medicine. Even simple taxonomy didn't become scientific till Linnaeus in the late 18th century. Victorian medicine developed anesthesia and antisepsis, but effective drugs that weren't likely to kill you started between the World Wars; Mendelian genetics found applications around the same time (after Mendel's work fell into obscurity for a long time); and the molecular basis was decades later. I was just reading a book by Daniel Dennett that cites a paragraph by Bateson, one of the founders of early twentieth century genetics, who says that the material of the chromosomes is an undifferentiated mass and it is not even conceivable that physics or chemistry could ever explain how heredity worked—it had to be the working of some vital force.

And yet, though biology is in one sense reducible to physics, for human purposes it's a largely independent discipline and one that's likely to be just as important to us. It's already made some changes that affect the texture of life; for example, when I read last night of Hawking's death I thought, "76? That seems a bit young, but given his health problems I suppose it's not." A century ago, dying at 76 would have been considered a full life and maybe a bit extra. The big older population is having a massive impact on human societies.

Or consider one of my personal hobby horses, taxonomy. Just over the past twenty years, genetic sequencing of vast numbers of species, and comparison of DNA similarity, has radically changed our views of phylogenetic relationship and taxonomy, with some help from plate tectonics. For example, the "African mammals" are now a group that includes African "insectivores," elephant shrews, aardvarks, hyraxes, elephants, and manatees; the closest relatives of whales are thought to be hippopotami rather than carnivores, and the closest relatives of carnivores are pangolins, which used to be considered rather basal placentals, and so on. Classification by morphological similarity has given place to classification by actual genetics. This is as big a change as Linnaeus coming up with the whole project or Darwin proposing that taxonomy should recapitulate phylogeny. This probably doesn't have many applications, but as science it's revolutionary.

I think that biology came into being around 1800, started to take off after 1900, and is in a period of rapid growth now. And it's an area where enhanced computation has made a huge difference, because biological systems are insanely complex, and also where the theory of computers is influential, because genetics turns out to be a matter of programming. (As is development; one of Alan Turing's important papers was a proposal for a theory of morphogenesis as an outcome of chemical processes.)
__________________
Bill Stoddard

I don't think we're in Oz any more.
Old 03-14-2018, 01:23 PM   #28
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by GodBeastX
I get what you're saying, and that might be why I liked the Aliens franchise: it felt more conservative from a sci-fi standpoint. There's really nothing you see in Aliens that can't be done with a lot of what we have today.

However, complexity limits are something that I think confuses people quite a bit when it comes to programming. Much like manufacturing and production, the person creating the end product may not, and need not, know how everything beneath it works.
Sure. There's a clear parallel between Leonard Read's essay "I, Pencil," which explains that no one actually knows how to make a pencil (not from raw materials in all the details), and Vernor Vinge's idea of a "mature programming environment," where it's programs all the way down.
__________________
Bill Stoddard

I don't think we're in Oz any more.
Old 03-14-2018, 03:28 PM   #29
AlexanderHowl
 
Join Date: Feb 2016
Re: No AI/No Supercomputers: Complexity Limits?

It is actually much easier to make a pencil from basic materials than to program directly in binary (I learned how to do the former as a Boy Scout when I was 12). I will never make a pencil, though, because it takes two or three hours to make one by hand, assuming you have the parts, mostly due to the time it takes the glue to dry, and it is just easier to buy a mechanical pencil. It would probably take weeks per pencil if you had to make the materials without modern technology.
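
(For scale, a hedged illustration of what programming "directly in binary" means at its most literal; a POSIX/x86-64 C sketch, not portable code. Six raw machine-code bytes, mov eax, 42 followed by ret, are dropped into executable memory and called.)

[code]
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* mov eax, 42 ; ret -- six bytes of x86-64 machine code. */
    unsigned char prog[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Ask the OS for a page of memory we are allowed to execute. */
    void *mem = mmap(NULL, sizeof prog,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;

    memcpy(mem, prog, sizeof prog);
    int (*fn)(void) = (int (*)(void))mem;  /* treat the bytes as a function */
    printf("the six bytes return %d\n", fn());  /* prints 42 */
    return 0;
}
[/code]

Writing real software this way, byte by byte, is the computing equivalent of making the graphite and the glue yourself.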
Old 03-14-2018, 05:26 PM   #30
whswhs
 
Join Date: Jun 2005
Location: Lawrence, KS
Re: No AI/No Supercomputers: Complexity Limits?

Quote:
Originally Posted by AlexanderHowl
It is actually much easier to make a pencil from basic materials than to program directly in binary (I learned how to do the former as a Boy Scout when I was 12). I will never make a pencil, though, because it takes two or three hours to make one by hand, assuming you have the parts, mostly due to the time it takes the glue to dry, and it is just easier to buy a mechanical pencil. It would probably take weeks per pencil if you had to make the materials without modern technology.
And was your pencil the kind that you can buy a box of at the drugstore or the stationer's, with cylindrical graphite, cedarwood enclosing it, bright yellow paint, a rubber at one end, and a bit of brass holding it in place? That was the kind Read was referring to. I didn't think it was necessary to quote him at length because I thought the point of his example was sufficiently clear.
__________________
Bill Stoddard

I don't think we're in Oz any more.