Riderless Bikes
#1
Senior Member
Thread Starter
Join Date: Mar 2013
Posts: 76
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 8 Post(s)
Likes: 0
Liked 0 Times in 0 Posts
#2
working on my sandal tan
Join Date: Aug 2011
Location: CID
Posts: 22,302
Bikes: 1991 Bianchi Eros, 1964 Armstrong, 1988 Diamondback Ascent, 1988 Bianchi Premio, 1987 Bianchi Sport SX, 1980s Raleigh mixte (hers), All-City Space Horse (hers)
Mentioned: 97 Post(s)
Tagged: 0 Thread(s)
Quoted: 3729 Post(s)
Liked 2,279 Times in 1,431 Posts
#4
Banned
David Gordon Wilson, MIT professor and author of Bicycling Science, demonstrated the stability of a bike with the fork mounted backwards (so lots of trail): he let it run down a hill with nobody on it, and it went quite straight.
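For anyone who wants to put numbers on the "lots of trail" part, here's a quick Python sketch of the standard geometric trail formula. The wheel radius, head angle, and fork offset below are just typical road-bike guesses for illustration, not the geometry Wilson actually used:

```python
import math

def trail_mm(wheel_radius_mm: float, head_angle_deg: float, rake_mm: float) -> float:
    """Mechanical trail from the usual geometric formula:
    trail = (R*cos(head) - rake) / sin(head), head angle measured from horizontal."""
    a = math.radians(head_angle_deg)
    return (wheel_radius_mm * math.cos(a) - rake_mm) / math.sin(a)

# Illustrative road-bike numbers: ~340 mm wheel radius, 73-degree head angle, 45 mm offset.
print(f"normal fork:   {trail_mm(340, 73, 45):.0f} mm of trail")    # ~57 mm
# Mounting the fork backwards flips the sign of the offset, so trail roughly triples.
print(f"reversed fork: {trail_mm(340, 73, -45):.0f} mm of trail")   # ~151 mm
```

More trail generally means stronger self-centering of the steering, which is consistent with the reversed-fork bike tracking straight on its own.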
#5
What happened?
Join Date: Jun 2007
Location: Around here somewhere
Posts: 8,050
Bikes: 3 Rollfasts, 3 Schwinns, a Shelby and a Higgins Flightliner in a pear tree!
Mentioned: 57 Post(s)
Tagged: 1 Thread(s)
Quoted: 1835 Post(s)
Liked 291 Times in 254 Posts
It'll get better than you are, though, and make you look bad.
__________________
I don't know nothing, and I memorized it in school and got this here paper I'm proud of to show it.
#6
Senior Member
Join Date: Aug 2015
Location: 'Murica
Posts: 234
Bikes: Fuji Allegro
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 44 Post(s)
Likes: 0
Liked 1 Time in 1 Post
This couldn't be more pointless.
Also, the complete elimination of human error opens the door for fatal software glitches. Remember those Toyotas that would throttle up on their own? Run you right off into your own death without warning.
I'll stop piloting my own vehicles when I'm dead. Hopefully not killed by someone's self driving automobile.
#7
Member
Join Date: Feb 2016
Location: Moore, OK
Posts: 38
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 12 Post(s)
Likes: 0
Liked 1 Time in 1 Post
This is great for the bikes. Think about it, while I'm in the grocery store, my bike can be out on the trails having some fun and working out its bearings. I'll just summon it with my phone about the time I pass the canned veggie aisle, and by the time I finish up (at the UNMANNED check-out, no less), my bike will be crashing into the bike rack at the front of the store. FINALLY, free-range biking!
It will take some time for the general public to get used to riderless bikes whizzing by all the time, for the 911 calls to stop, and for little kids to forget everything they saw in Bedknobs and Broomsticks. But it will surely be worth the transition.
This will benefit the poorest members of society. The panhandlers downtown who are cruising around on bikes all the time can send their bikes out for a riderless ride and spend more time themselves asking for money. Of course, humans are never really satisfied... about the time everyone has one of these, we'll all be wanting runnerless shoes.
Matt
#10
Senior Member
Join Date: Dec 2007
Posts: 832
Mentioned: 2 Post(s)
Tagged: 0 Thread(s)
Quoted: 90 Post(s)
Likes: 0
Liked 17 Times in 15 Posts
Since I also ride a motorcycle, I completely believe that, especially about cars.
Stepping back into seriousness (because this has gotten far, far too silly), I wonder how the car sensors would pick up a smaller object near them, like a bike or motorcycle, especially since a motorcycle can run at the same speeds and would be occupying a lane on its own. Say, if a car was in the left lane and a cycle was in the right lane, riding on the right side of its lane.
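To make the question a bit more concrete, here's a toy Python sketch of one way a planner could assign detections to lanes purely by lateral offset, so a motorcycle hugging one side of its lane still claims the whole lane. This is only an illustration of the idea, not how any real system does it; the names and the 3.7 m lane width are assumptions:

```python
from dataclasses import dataclass

LANE_WIDTH_M = 3.7  # typical freeway lane width; an assumption for this toy example

@dataclass
class Detection:
    label: str
    lateral_offset_m: float  # measured from the centre of the ego car's lane

def lane_index(det: Detection) -> int:
    """Relative lane a detection occupies (0 = ego lane, +1 = one lane to the right),
    based only on lateral offset -- a motorcycle and a car are treated the same."""
    return round(det.lateral_offset_m / LANE_WIDTH_M)

# A motorcycle riding on the far right of the next lane over still "owns" that lane,
# exactly like a car centred in it would.
moto = Detection("motorcycle", lateral_offset_m=LANE_WIDTH_M + 1.2)
car = Detection("car", lateral_offset_m=LANE_WIDTH_M)
print(lane_index(moto), lane_index(car))  # 1 1
```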
#11
What happened?
Join Date: Jun 2007
Location: Around here somewhere
Posts: 8,050
Bikes: 3 Rollfasts, 3 Schwinns, a Shelby and a Higgins Flightliner in a pear tree!
Mentioned: 57 Post(s)
Tagged: 1 Thread(s)
Quoted: 1835 Post(s)
Liked 291 Times in 254 Posts
Isn't the entire point of a bicycle to get on and use your own power as efficiently as possible in getting someplace?
Why does this sound as bad as the toilet paper commercials that said you could 'go commando' after usage?
__________________
I don't know nothing, and I memorized it in school and got this here paper I'm proud of to show it.
#12
Member
Join Date: Feb 2016
Location: Moore, OK
Posts: 38
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 12 Post(s)
Likes: 0
Liked 1 Time in 1 Post
On a serious note, I firmly believe that driverless cars will *never* be truly workable in the current real-world environment until/unless artificial intelligence makes some big breakthroughs. Philphine brings up a great point about the driverless car figuring out if a motorcycle or other smaller object is occupying a lane. Without really awesome (and not-yet-attained) AI, any attempt to handle these sorts of situations individually is just a limited-scenario hack.
Matt
#13
working on my sandal tan
Join Date: Aug 2011
Location: CID
Posts: 22,302
Bikes: 1991 Bianchi Eros, 1964 Armstrong, 1988 Diamondback Ascent, 1988 Bianchi Premio, 1987 Bianchi Sport SX, 1980s Raleigh mixte (hers), All-City Space Horse (hers)
Mentioned: 97 Post(s)
Tagged: 0 Thread(s)
Quoted: 3729 Post(s)
Liked 2,279 Times in 1,431 Posts
On a serious note, I firmly believe that driverless cars will *never* be truly workable in the current real-world environment until/unless artificial intelligence makes some big breakthroughs. Philphine brings up a great point about the driverless car figuring out if a motorcycle or other smaller object is occupying a lane. Without really awesome (and not-yet-attained) AI, any attempt to handle these sorts of situations individually is just a limited-scenario hack.
Matt
#14
Junior Member
Join Date: Oct 2016
Posts: 9
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Likes: 0
Liked 0 Times in 0 Posts
On a serious note, I firmly believe that driverless cars will *never* be truly workable in the current real-world environment until/unless artificial intelligence makes some big breakthroughs. Philphine brings up a great point about the driverless car figuring out if a motorcycle or other smaller object is occupying a lane. Without really awesome (and not-yet-attained) AI, any attempt to handle these sorts of situations individually is just a limited-scenario hack.
Matt
#15
Junior Member
Join Date: Oct 2016
Posts: 9
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Likes: 0
Liked 0 Times in 0 Posts
It's not a great point, it's a stupid point. The cameras and software on driverless cars are much better able to see and keep track of the traffic around them than we humans are.
https://www.youtube.com/watch?v=tiwVMrTLUWg
#16
What happened?
Join Date: Jun 2007
Location: Around here somewhere
Posts: 8,050
Bikes: 3 Rollfasts, 3 Schwinns, a Shelby and a Higgins Flightliner in a pear tree!
Mentioned: 57 Post(s)
Tagged: 1 Thread(s)
Quoted: 1835 Post(s)
Liked 291 Times in 254 Posts
What is the point of a riderless bike though? Will there be a Riderless Bike Forums in the future?
It's a parlor trick!
__________________
I don't know nothing, and I memorized it in school and got this here paper I'm proud of to show it.
#17
Member
Join Date: Feb 2016
Location: Moore, OK
Posts: 38
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 12 Post(s)
Likes: 0
Liked 1 Time in 1 Post
Come on ThermionicScott, tell me how you really feel about my opinion.
Last year, AI made a huge leap forward by passing the "three wise men" test, in which three robots were programmed with the same AI, but two of them were muted. Then each one was asked who could speak. Two of them said nothing, and one said "I do not have enough information to answer that question" and then corrected itself by saying "wait, actually I believe I have a voice." This is a huge step forward, as it marks the first time a robot has officially acknowledged its own existence.
That's certainly a very interesting response, but from a programmer's perspective, its significance is hard to quantify without knowing HOW it arrived at that answer.
Hope springs eternal, but I can't imagine an AI ever eclipsing the reasoning and decision-making ability of a human. The thing about humans is that we have something an AI can never have: imagination. It follows that we therefore also have insight. Because of this, no matter what situation arises, no matter how extreme the circumstance, no matter how unplanned the event, a human always has the capacity to evaluate the situation and develop some sort of rational response. It may not always be the best response, and it may not even be the correct response, but there will be a plan of action that takes the immediate factors of the situation into account.
I'm sure anyone who's ever spent any time with a computer or other electronic gadget (particularly if you've had to support, fix, or develop software) can relate to the concept of "babysitting" the software. You may have something that works very well, but every now and then there is that unforeseen scenario in which the software loses its "mind" and does something completely unpredicted or silly. Yet, a human might look at that scenario and know immediately what to do (or, at the very least, know that what the software is doing is NOT a good response).
The aviation industry has been into this field for a long time. Airplanes with the right equipment can practically fly themselves. But, we really can't trust the systems entirely. This point is driven into pilots throughout their training, and is reinforced through embarrassing incidents and tragic accidents. There are numerous instances in which an autopilot or flight control system will "throw its hands up" and simply quit flying the airplane (Air France 447, for instance), at which point the pilot (who has hopefully stayed in the loop, but often has not) must assume control. There are also other instances in which the computer does something strange and dangerous, something a human pilot would never do (though in most such cases, the computer will just give up and disengage, thanks to fail-safes built into the software).
Now, compare this to cars: quick, appropriate responses are FAR less important in an airplane, where a pilot (or computer) has the luxury of wandering around the sky a few thousand feet here or there, heading the wrong direction for a bit, not properly controlling airspeed, etc. That's a lot different from the world of driving, where we are regularly mere inches away from another car coming the opposite direction at a differential speed of 130mph+.
That's not to say I don't see driverless cars making a big impact in the future. If our road system is modified a bit and made more uniform, if the car is only expected to go on relatively improved roads, if GPS is always available and/or some sort of computer-friendly road marker system is devised, if extremely twisty or hilly roads can be properly dealt with by the computer, if specific types of inclement weather are avoided, and IF the driverless cars are kept in good repair (and we know some won't be), then they may be reasonably workable.
And, who knows, maybe there will be a paradigm shift in AI and we'll see something that completely bypasses the current AI pitfalls, and can truly challenge the human mind's ability to reason. I'm happy to be proven wrong.
Matt
#18
working on my sandal tan
Join Date: Aug 2011
Location: CID
Posts: 22,302
Bikes: 1991 Bianchi Eros, 1964 Armstrong, 1988 Diamondback Ascent, 1988 Bianchi Premio, 1987 Bianchi Sport SX, 1980s Raleigh mixte (hers), All-City Space Horse (hers)
Mentioned: 97 Post(s)
Tagged: 0 Thread(s)
Quoted: 3729 Post(s)
Liked 2,279 Times in 1,431 Posts

Hope springs eternal, but I can't imagine an AI ever eclipsing the reasoning and decision-making ability of a human. The thing about humans is that we have something an AI can never have: imagination. It follows that we therefore also have insight. Because of this, no matter what situation arises, no matter how extreme the circumstance, no matter how unplanned the event, a human always has the capacity to evaluate the situation and develop some sort of rational response. It may not always be the best response, and it may not even be the correct response, but there will be a plan of action that takes the immediate factors of the situation into account.
#19
Senior Member
Join Date: Jun 2015
Location: Hudson Valley, New York
Posts: 481
Bikes: 2014 Giant Roam
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 84 Post(s)
Likes: 0
Liked 0 Times in 0 Posts
As another article said about Hotz and Tesla's automated driving: making it 99% accurate is easy; making it 99.9999% accurate is much harder, and these systems need to be that accurate. Also, cars just stopping even 1% of the time would clog up every highway. There are easily 200 cars going by at any time, and if 2 are always stopped, that's a problem. But you're just pulling these numbers out of the air; I don't think you mean 1% is okay. Further, cars stopping in the middle of the road is not safe. If the error is on a person we can blame, that's one thing, but if the error is on the product, that's endless litigation.
For riderless bikes, I'm not sure what the new thing is, but it certainly seems unnecessary and would have all the same problems as cars. As for the statistics, driverless cars have not been out in any significant usage to determine whether they are safer or not. Controlled studies with engineers babysitting them aren't representative.
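To put rough numbers on that, here's a quick back-of-the-envelope in Python. The 200-cars figure comes from the post above; the failure rates are made up purely for illustration:

```python
# Back-of-the-envelope: if some fraction of self-driving cars are stalled at any
# moment, how many stopped cars does a busy stretch of highway have in view?
# The 200 comes from the post above; the failure rates are invented.
cars_in_view = 200

for failure_rate in (0.01, 1e-4, 1e-6):
    stopped = cars_in_view * failure_rate
    print(f"stalled fraction {failure_rate:.6%}: ~{stopped:g} stopped cars in view")
# 1% -> ~2 stopped cars in view at any time (a mess); 0.0001% -> ~0.0002, i.e.
# essentially never -- which is the gap between "99% accurate" and "99.9999% accurate".
```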
#20
working on my sandal tan
Join Date: Aug 2011
Location: CID
Posts: 22,302
Bikes: 1991 Bianchi Eros, 1964 Armstrong, 1988 Diamondback Ascent, 1988 Bianchi Premio, 1987 Bianchi Sport SX, 1980s Raleigh mixte (hers), All-City Space Horse (hers)
Mentioned: 97 Post(s)
Tagged: 0 Thread(s)
Quoted: 3729 Post(s)
Liked 2,279 Times in 1,431 Posts
As another article said about Hotz and Tesla's automated driving: making it 99% accurate is easy; making it 99.9999% accurate is much harder, and these systems need to be that accurate. Also, cars just stopping even 1% of the time would clog up every highway. There are easily 200 cars going by at any time, and if 2 are always stopped, that's a problem. But you're just pulling these numbers out of the air; I don't think you mean 1% is okay. Further, cars stopping in the middle of the road is not safe. If the error is on a person we can blame, that's one thing, but if the error is on the product, that's endless litigation.
For riderless bikes, I'm not sure what the new thing is, but it certainly seems unnecessary and would have all the same problems as cars. As for the statistics, driverless cars have not been out in any significant usage to determine whether they are safer or not. Controlled studies with engineers babysitting them aren't representative.
Furthermore, the video in the OP was clearly a joke/satire, but that didn't stop it from going over a bunch of heads.
#21
Junior Member
Join Date: Oct 2016
Posts: 9
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Likes: 0
Liked 0 Times in 0 Posts
Come on ThermionicScott, tell me how you really feel about my opinion.
That's certainly a very interesting response, but from a programmer's perspective, its significance is hard to quantify without knowing HOW it arrived at that answer.
Hope springs eternal, but I can't imagine an AI ever eclipsing the reasoning and decision-making ability of a human. The thing about humans is that we have something an AI can never have: imagination. It follows that we therefore also have insight. Because of this, no matter what situation arises, no matter how extreme the circumstance, no matter how unplanned the event, a human always has the capacity to evaluate the situation and develop some sort of rational response. It may not always be the best response, and it may not even be the correct response, but there will be a plan of action that takes the immediate factors of the situation into account.
I'm sure anyone who's ever spent any time with a computer or other electronic gadget (particularly if you've had to support, fix, or develop software) can relate to the concept of "babysitting" the software. You may have something that works very well, but every now and then there is that unforeseen scenario in which the software loses its "mind" and does something completely unpredicted or silly. Yet, a human might look at that scenario and know immediately what to do (or, at the very least, know that what the software is doing is NOT a good response).
The aviation industry has been into this field for a long time. Airplanes with the right equipment can practically fly themselves. But, we really can't trust the systems entirely. This point is driven into pilots throughout their training, and is reinforced through embarrassing incidents and tragic accidents. There are numerous instances in which an autopilot or flight control system will "throw its hands up" and simply quit flying the airplane (Air France 447, for instance), at which point the pilot (who has hopefully stayed in the loop, but often has not) must assume control. There are also other instances in which the computer does something strange and dangerous, something a human pilot would never do (though in most such cases, the computer will just give up and disengage, thanks to fail-safes built into the software).
Now, compare this to cars: quick, appropriate responses are FAR less important in an airplane, where a pilot (or computer) has the luxury of wandering around the sky a few thousand feet here or there, heading the wrong direction for a bit, not properly controlling airspeed, etc. That's a lot different from the world of driving, where we are regularly mere inches away from another car coming the opposite direction at a differential speed of 130mph+.
That's not to say I don't see driverless cars making a big impact in the future. If our road system is modified a bit and made more uniform, if the car is only expected to go on relatively improved roads, if GPS is always available and/or some sort of computer-friendly road marker system is devised, if extremely twisty or hilly roads can be properly dealt with by the computer, if specific types of inclement weather are avoided, and IF the driverless cars are kept in good repair (and we know some won't be), then they may be reasonably workable.
And, who knows, maybe there will be a paradigm shift in AI and we'll see something that completely bypasses the current AI pitfalls, and can truly challenge the human mind's ability to reason. I'm happy to be proven wrong.
Matt
#22
What happened?
Join Date: Jun 2007
Location: Around here somewhere
Posts: 8,050
Bikes: 3 Rollfasts, 3 Schwinns, a Shelby and a Higgins Flightliner in a pear tree!
Mentioned: 57 Post(s)
Tagged: 1 Thread(s)
Quoted: 1835 Post(s)
Liked 291 Times in 254 Posts
I have no experience in the field as I stay on the road.
__________________
I don't know nothing, and I memorized it in school and got this here paper I'm proud of to show it.
#23
Member
Join Date: Feb 2016
Location: Moore, OK
Posts: 38
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 12 Post(s)
Likes: 0
Liked 1 Time in 1 Post
I'm amused that a few folks are still thinking the original video was serious. 
ThermionicScott, I wouldn't say I have a huge amount of faith in the ability of humans to make good decisions on the road, I just have faith in the tendency of them to make reasonably predictable decisions (vs completely wild and counter-intuitive decisions). Plus, while humans may not always make the best decisions, by taking on the responsibility for the drive into your own hands, your level of risk is more closely tied to your own personal decisions. And I think that is always a good thing. I like known risk: my own skills, condition, and decision-making ability. I don't like unknown risk: the minds of the programmers who designed the car's AI to foresee or somehow allow for every possible circumstance that could ever arise, which of course is impossible.
I'm not really disagreeing with you or trying to invalidate your point of view, I'm just trying to clarify my approach to the whole idea. We differ in where we want to allocate risk.
Imnotchinese, you're hitting on a very interesting paradox of computers vs the human mind. There are certain procedural, logical operations that computers are far superior at. For instance, if an AI-controlled car has information on all nearby vehicles, a sufficiently powerful computer can, in a matter of milliseconds, simulate a number of potential outcomes depending on different courses of action, and choose the one that would seem to have the lowest risk based on pre-programmed conditions and the laws of physics. Furthermore, if the computer is able to accurately measure things like distance, relative speed, direction, etc, it can do very accurate calculations for things like required stopping distance and likely path of travel. But the human mind can do other things in near-instant time, things that continue to baffle our understanding (such as highly abstract associations, inference, etc). Simple example: two people who know each other very well are talking. One says something rather vague and poorly worded, like "it's going to be worth it", when such a statement has no bearing on the current conversation. But the other person will, in their mind, instantly go to something they know the first person is working at, or a goal the two friends were recently trying to accomplish, and make a pretty accurate guess that that's the thing the first person was talking about. Or it may be as simple as a comment that goes back to a conversation the two had some minutes ago. Oftentimes, the mind is somehow able to pick up that comment and associate it almost instantly with one of perhaps many recent conversations.
These truly disparate strengths of humans vs computers are why, while I'm very skeptical of an AI-controlled car, I think a computer-aided car (as we are already seeing) has huge potential. Imagine having graphical vectors of other traffic's movement projected onto your windshield, or markers showing where you'll likely stop if you apply maximum braking!
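To make the "simulate a number of potential outcomes and choose the lowest-risk one" idea concrete, here's a toy Python sketch of a constant-velocity rollout plus the v^2/(2*mu*g) stopping-distance figure a windshield marker could show. Every name, number, and the risk score here are invented for illustration; real planners weigh far more than following distance:

```python
G = 9.81           # m/s^2
MU = 0.7           # assumed tyre-road friction coefficient
DT, HORIZON = 0.1, 3.0   # roll each candidate action 3 s ahead in 0.1 s steps

def stopping_distance(speed_mps: float) -> float:
    """Idealised braking distance v^2 / (2*mu*g) -- the sort of number a
    heads-up display could draw as a 'you stop about here' marker."""
    return speed_mps ** 2 / (2 * MU * G)

def min_gap_after(accel, ego_speed, lead_gap, lead_speed):
    """Smallest following gap over the horizon if the ego car holds a constant
    acceleration and the lead car holds a constant speed."""
    gap, v, min_gap, t = lead_gap, ego_speed, lead_gap, 0.0
    while t < HORIZON:
        v = max(0.0, v + accel * DT)
        gap += (lead_speed - v) * DT
        min_gap = min(min_gap, gap)
        t += DT
    return min_gap

# Candidate actions (m/s^2) and a simple scene: closing on a slower car ahead.
candidates = {"brake hard": -6.0, "brake gently": -2.0, "hold speed": 0.0, "accelerate": 1.5}
ego_speed, lead_gap, lead_speed = 30.0, 25.0, 22.0   # m/s, m, m/s

# "Risk" here is just how small the gap gets; pick the action with the most margin.
best = max(candidates, key=lambda name: min_gap_after(candidates[name], ego_speed, lead_gap, lead_speed))
print("chosen action:", best)                          # brake hard
print(f"stopping distance at {ego_speed:.0f} m/s: {stopping_distance(ego_speed):.0f} m")
```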
Ah, this is interesting stuff.
I've been interested in AI and software failure modes for some time. My career is in computers, and I've been programming for fun since 2002 or so. Being "passionately curious" is a good thing, I think it's the most important component of learning.
Matt

#24
working on my sandal tan
Join Date: Aug 2011
Location: CID
Posts: 22,302
Bikes: 1991 Bianchi Eros, 1964 Armstrong, 1988 Diamondback Ascent, 1988 Bianchi Premio, 1987 Bianchi Sport SX, 1980s Raleigh mixte (hers), All-City Space Horse (hers)
Mentioned: 97 Post(s)
Tagged: 0 Thread(s)
Quoted: 3729 Post(s)
Liked 2,279 Times in 1,431 Posts
ThermionicScott, I wouldn't say I have a huge amount of faith in the ability of humans to make good decisions on the road, I just have faith in the tendency of them to make reasonably predictable decisions (vs completely wild and counter-intuitive decisions). Plus, while humans may not always make the best decisions, by taking on the responsibility for the drive into your own hands, your level of risk is more closely tied to your own personal decisions. And I think that is always a good thing. I like known risk: my own skills, condition, and decision-making ability. I don't like unknown risk: the minds of the programmers who designed the car's AI to foresee or somehow allow for every possible circumstance that could ever arise, which of course is impossible.
I'm not really disagreeing with you or trying to invalidate your point of view, I'm just trying to clarify my approach to the whole idea. We differ in where we want to allocate risk.
Imnotchinese, you're hitting on a very interesting paradox of computers vs the human mind. There are certain procedural, logical operations that computers are far superior at. For instance, if an AI-controlled car has information on all nearby vehicles, a sufficiently powerful computer can, in a matter of milliseconds, simulate a number of potential outcomes depending on different courses of action, and choose the one that would seem to have the lowest risk based on pre-programmed conditions and the laws of physics. Furthermore, if the computer is able to accurately measure things like distance, relative speed, direction, etc, it can do very accurate calculations for things like required stopping distance and likely path of travel. But the human mind can do other things in near-instant time, things that continue to baffle our understanding (such as highly abstract associations, inference, etc). Simple example: two people who know each other very well are talking. One says something rather vague and poorly worded, like "it's going to be worth it", when such a statement has no bearing on the current conversation. But the other person will, in their mind, instantly go to something they know the first person is working at, or a goal the two friends were recently trying to accomplish, and make a pretty accurate guess that that's the thing the first person was talking about. Or it may be as simple as a comment that goes back to a conversation the two had some minutes ago. Oftentimes, the mind is somehow able to pick up that comment and associate it almost instantly with one of perhaps many recent conversations.
These truly disparate strengths of humans vs computers are why, while I'm very skeptical of an AI-controlled car, I think a computer-aided car (as we are already seeing) has huge potential. Imagine having graphical vectors of other traffic's movement projected onto your windshield, or markers showing where you'll likely stop if you apply maximum braking!
Ah, this is interesting stuff.
I've been interested in AI and software failure modes for some time. My career is in computers, and I've been programming for fun since 2002 or so. Being "passionately curious" is a good thing, I think it's the most important component of learning.
Matt

If you were severely nearsighted and wanted to do away with your glasses or contacts, would you rather have LASIK or radial keratotomy?

#25
Member
Join Date: Feb 2016
Location: Moore, OK
Posts: 38
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 12 Post(s)
Likes: 0
Liked 1 Time in 1 Post
Hmmm, if I were nearsighted and wanted to do away with my glasses, I would probably rather avoid the risk of surgery altogether and go with the risk of walking into a post from not wearing them.
Matt
