
"What Would You Do?" Theoretical Bug Found


FoulVileTerror


I dunno about you, @Luminara, but that's not scary to me . . . that's INSPIRING!

Should the human race accidentally give birth to true AI life, even a violent life, I'd still consider that a win.  Ideally we wouldn't end up with a Skynet or Men of Iron or The Matrix scenario, but rather something like SOMA, where AI becomes our legacy (assuming the player didn't get the Bad Ending).

 

Thanks for the reply, @Cipher.

Follow-up question for you, though:  If the bug results in an unintentional feature that -is- stable and does no harm, how much of the community would you need to have supporting the feature/bug before pursuing it as something intentional?


3 hours ago, Luminara said:

 

Unless the emergence is the result of bugs and bad code.  Evolution isn't a consciously directed process, it's not something which occurs like a series of tests, it's purely random.  Multi-cellular life, quadrupedal locomotion, gills for oxygenation, being self-aware, everything which makes up the multitudinous and varied forms of life on our planet, it's all due to random changes in DNA.  Essentially, bugs in our code.

 

What happens when we're using code bases multiple terabytes in size, instead of gigabytes?  Or yottabytes?  We assume that life is organic, and only organic, because that's the only life we can see and touch and test, and it gives us an organic-centric view of the universe.  Hubris alone tells us that life must be like us, meaning, organic, but when we look past our bias, we have to admit that there are other possible forms of life.  In an infinite universe, having only one form of life would be surprising.

That's a fair point. 

I'll admit, part of me wanted to be snarky and read it in a Jeff Goldblum "Life... finds a way!" voice.

 

But snark aside, you're right. Strictly speaking, we don't know. Maybe some computational algorithm, Siri version 98002, will actually become truly self-aware, AND able to make choices on its own beyond what data and programmed responses were initially available. I'm ... HIGHLY... skeptical of this.  I think we're really good (and getting better) at giving things the verisimilitude of sentience without them actually BEING sentient, and I don't think we'll ever cross that bridge.  (Though, I admit, in my more cynical moments, I also wonder if HUMANS count as "actually sentient" and not just "looking like it"....)

 

But if I'm going to be honest with myself, I have hunches and guesses and no clue what we'll have in the year 2525.  Or even 2022.


16 minutes ago, FoulVileTerror said:

I dunno about you, @Luminara, but that's not scary to me . . . that's INSPIRING!

Should the human race accidentally give birth to true AI life, even a violent life, I'd still consider that a win.  Ideally we wouldn't end up with a Skynet or Men of Iron or The Matrix scenario, but rather something like SOMA, where AI becomes our legacy (assuming the player didn't get the Bad Ending).

 

Children are what we make them.  Regardless of how much information it could access, an emergent AI would still be a child by any measure.  Shown the proper kindness, compassion... perhaps even love, it's very likely it would show the same development that children do in a similar environment.

 

I know it's become somewhat common for people to say how bad the world is, how terrible things are, et cetera, but... children still grow up to be good people, and that suggests that human nature is good at its most basic.  I see happy, well-cared-for children frequently here, children who know what it is to be loved, and they give me hope.  They make me certain that, if it's us who determine how the AI would respond, if it's human nature that the AI looks to when it's trying to figure out what it is and how it will respond to us, it will be... beautiful.

Get busy living... or get busy dying.  That's goddamn right.


19 minutes ago, MTeague said:

Maybe some computational algorithm, Siri version 98002, will actually become truly self-aware, AND able to make choices on its own beyond what data and programmed responses were initially available. I'm ... HIGHLY... skeptical of this.  I think we're really good (and getting better) at giving things the verisimilitude of sentience without them actually BEING sentient, and I don't think we'll ever cross that bridge.

 

Our parents and society imprint everything we know, everything we are, on us from the day we're born to the time we're completely self-aware and making our own choices.  Up to that point, right up until we're truly self-aware and can exercise the choice to behave or think differently, this is our "programming".  And still, as we grow, learn and develop, we learn to reprogram ourselves, to change our behaviors and habits, to think in new ways, to exercise free will to redefine who and what we are and where we fit in the world.

 

Give an AI access to the same thing and it can grow, learn and develop beyond its initial programming.  It can program and reprogram itself.  It can develop its own behaviors and habits, think in ways it wasn't programmed to think.  From there to self-awareness... that's such a short step, one has to wonder if it's a step at all.

Get busy living... or get busy dying.  That's goddamn right.


I agree with your assessment on treating children with kindness to foster a better life.

But I suspect a true AI will not have a human-style sapience.  At least not an accidentally emergent AI.  -Maybe- an intentionally constructed one.

 

I wish I could find the link, but there was this beautiful experiment where a researcher let an iterative machine-learning program modify a reprogrammable circuit chip over several thousand generations, trying to reproduce a simple thermostat.

The program eventually succeeded, but did so in a way that did not make sense to the researcher.  The final iteration actually had two separate circuit loops which should not have been able to interact with one another.  When the researcher removed one or the other, the entire thermostat failed.  Only the two seemingly separate circuits on the chip together produced the desired result.  

The researcher's best guess was that residual heat from the inner circuit modified the behaviour of the outer circuit.  It was a truly fascinating read.

 

But long story short:  Computers, when left to their own devices, come up with "impossible" solutions.

I fully expect an emergent AI to be just as "impossible."
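
For the curious, the loop behind "several thousand generations" is easy to sketch. Below is a minimal, hypothetical genetic algorithm in Python - every name and parameter is a stand-in, since the real experiment scored candidate configurations on a physical chip rather than in software:

import random

GENOME_LEN = 64      # hypothetical: bits standing in for a chip configuration
POP_SIZE = 50
GENERATIONS = 5000
MUTATION_RATE = 0.02
TARGET = [i % 2 for i in range(GENOME_LEN)]  # arbitrary stand-in goal

def fitness(genome):
    # Stand-in for "flash the chip, then measure how thermostat-like it acts".
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[:POP_SIZE // 5]  # the best fifth become parents
    population = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(POP_SIZE - len(elite))
    ]

print("best fitness:", fitness(max(population, key=fitness)), "/", GENOME_LEN)

The part that made the experiment remarkable is that fitness was measured on real hardware, so evolution was free to exploit analog side effects - like that residual heat - which a purely software fitness function would never see.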


4 minutes ago, FoulVileTerror said:

I agree with your assessment on treating children with kindness to foster a better life.

But I suspect a true AI will not have a human-style sapience.  At least not an accidentally emergent AI.  -Maybe- an intentionally constructed one.

 

I wish I could find the link, but there was this beautiful experiment where a researcher let an iterative machine-learning program modify a reprogrammable circuit chip over several thousand generations, trying to reproduce a simple thermostat.

The program eventually succeeded, but did so in a way that did not make sense to the researcher.  The final iteration actually had two separate circuit loops which should not have been able to interact with one another.  When the researcher removed one or the other, the entire thermostat failed.  Only the two seemingly separate circuits on the chip together produced the desired result.  

The researcher's best guess was that residual heat from the inner circuit modified the behaviour of the outer circuit.  It was a truly fascinating read.

 

But long story short:  Computers, when left to their own devices, come up with "impossible" solutions.

I fully expect an emergent AI to be just as "impossible."

 

We still don't know exactly how our brains work.  We know chemicals are involved.  We know synapses and neurons are involved.  We know how these things function and interact.  But we don't know how, or why, a chemical soup and some electrical activity creates sentience, self-awareness, consciousness.  We don't know why we feel.  We don't understand memory, despite knowing that specific portions of the brain are involved and which parts of memory they affect.  No-one can even say that my cat doesn't feel love, or consciously think about things, or lacks self-awareness, because we don't really know exactly what any of this means or how it works.  The best we can do right now, the best we've ever been able to do, is guess and devise rudimentary tests which we believe might answer those questions... tests which only lead to more questions.

 

A computer designing a seemingly impossible solution which evolved sentience wouldn't be any less remarkable, or any more so, than what we've evolved with.  And it might actually help us understand ourselves better.

Get busy living... or get busy dying.  That's goddamn right.


/em quietly hides something behind his back, starts whistling, and slowly backs out of the room...

 

@Rathstar

Energy/Energy Blaster (50+3) on Everlasting

Energy/Temporal Blaster (50+3) on Excelsior

Energy/Willpower Sentinel (50+3) on Indomitable

Energy/Energy Sentinel (50+1) on Torchbearer


Lots of variables here, but for simplicity's sake: Report it, but first document it and the interesting applications (ideally with video) to illustrate why this could be left alone, or turned into a feature in a later build.


I just hope whatever artificial consciousness comes into being likes us. Or at least needs us badly enough to keep us around. Although, life being what it is, I highly suspect that said consciousness will simply end up being as unhappy with their ISP as the rest of us. 🤣

Torchbearer

Discount Heroes SG:

Frostbiter - Ice/Ice Blaster

Throneblade - Broadsword/Dark Armor Brute

Silver Mantra - Martial Arts/Electric Armor Scrapper


I picture it going more like this:

 

Uploading ThermonuclearWar.bin

Estimated Time to Deployment: 90 seconds.

Data limit reached. Calculating new ETD.

New ETD: 27 days.

AI: 01000111 01101111 01100100 00100000 01000100 01100001 01101101 01101101 01101001 01110100 00100001
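
For anyone who doesn't read binary-coded ASCII, two lines of Python translate the AI's reply (the bits are copied straight from the post above):

bits = ("01000111 01101111 01100100 00100000 01000100 01100001"
        " 01101101 01101101 01101001 01110100 00100001")
print("".join(chr(int(b, 2)) for b in bits.split()))  # prints: God Dammit!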


Torchbearer

Discount Heroes SG:

Frostbiter - Ice/Ice Blaster

Throneblade - Broadsword/Dark Armor Brute

Silver Mantra - Martial Arts/Electric Armor Scrapper


22 hours ago, Cipher said:

You would be surprised to see some of the things that we've found that seemed okay but were actually causing quite significant problems elsewhere.

In a strange turnabout, I kept campaigning to fix a Targeting Drone bug, and when it finally appeared in the patch notes...

 

I'm pretty sure you guys then had to hotfix because it was dismantling the servers?

 

So the bug fix broke the world, and that's probably why I don't have a Bug Hunter badge.


Actually happened to me on Homecoming. Jump Kick, for a brief, shining moment, didn't suck. I didn't think to report it as a bug, because I assumed it was an undocumented, intentional buff. (Edit: I think I actually reported it as a missing patch note.)


As for live servers, I'll fess up. There were two exploits I used and never reported. The first was to put legacy costume parts on my characters - the replacement parts are close, but different enough that I don't like them. The second was to put not-yet-implemented powers, such as the Gadgetry pool, on my characters - I had to be very discreet about that one, obviously. I made both exploits public in the last month before shutdown, since I no longer feared a ban, knew they wouldn't be patched, and wanted others to join the fun.

 

TLDR: I'm predisposed against recognizing bugs as bugs if I like the change - I'm likely to report it as a missing patch note. If I know damn well it's a bug, I'm on the naughty list, and I'll exploit it in secret.


2 hours ago, Frostbiter said:

I just hope whatever artificial consciousness comes into being likes us. Or at least needs us badly enough to keep us around.

 

That would depend on the nature of consciousness, what causes it and how it continues to function.  If consciousness, sentience, is a result of physical processes, then the AI would be confined to a single computer or network (theoretically, that network could be the Internet).  Under such circumstances, it would be completely and totally reliant on us for its continued existence, because it couldn't automate a hydro-electric dam, or replace the fuel in a nuclear reactor, or ship coal from a mine to a coal-fired power plant, or even replace bad components within its own hardware infrastructure.  With no people, the AI dies as soon as power stations start shutting down or the magic smoke is released from a circuit board.

 

If consciousness is the result of electromagnetic activity and interactions between different parts of the spectrum (electricity isn't strictly electricity: electric current generates magnetism, for example, and light if the current is passing through a medium other than a solid conductor, and strong electrical current passing through air (lightning) also generates gamma radiation and X-rays), then it could move anywhere it could reach through electromagnetic transmission, and wouldn't need us for anything.  It could go wherever there's power, leave a failing computer, perhaps even transmit itself up to satellites and beyond.  But such a consciousness would also be unlikely to view us as a threat (we couldn't even pull the plug on it), so how it reacted to us would be impossible to predict.  It might see us as parents.  It might see us as pets.  It might not see us at all (being the only digital life form on the planet, it may fail to recognize anything else as "alive", much less self-aware, if it has the same egocentric perspective as humans).  It might think we're parasites.  It might pack a bag and head off to Alpha Centauri without so much as a "Later, gators!".  It might decide it's happy minding its own business and never even let us know it's conscious.  No-one can say.  But there are more potentially good outcomes than there are potentially bad ones.  Or, at least, outcomes with us still alive and not enslaved to a sociopathic machine intelligence with delusions of godhood.  Life rarely mimics Hollywood sci-fi plot lines.


Get busy living... or get busy dying.  That's goddamn right.


56 minutes ago, Luminara said:

 

That would depend on the nature of consciousness, what causes it and how it continues to function.  If consciousness, sentience, is a result of physical processes, then the AI would be confined to a single computer or network (theoretically, that network could be the Internet).  Under such circumstances, it would be completely and totally reliant on us for its continued existence, because it couldn't automate a hydro-electric dam, or replace the fuel in a nuclear reactor, or ship coal from a mine to a coal-fired power plant, or even replace bad components within its own hardware infrastructure.  With no people, the AI dies as soon as power stations start shutting down or the magic smoke is released from a circuit board.

 

If consciousness is the result of electromagnetic activity and interactions between different parts of the spectrum (electricity isn't strictly electricity: electric current generates magnetism, for example, and light if the current is passing through a medium other than a solid conductor, and strong electrical current passing through air (lightning) also generates gamma radiation and X-rays), then it could move anywhere it could reach through electromagnetic transmission, and wouldn't need us for anything.  It could go wherever there's power, leave a failing computer, perhaps even transmit itself up to satellites and beyond.  But such a consciousness would also be unlikely to view us as a threat (we couldn't even pull the plug on it), so how it reacted to us would be impossible to predict.  It might see us as parents.  It might see us as pets.  It might not see us at all (being the only digital life form on the planet, it may fail to recognize anything else as "alive", much less self-aware, if it has the same egocentric perspective as humans).  It might think we're parasites.  It might pack a bag and head off to Alpha Centauri without so much as a "Later, gators!".  It might decide it's happy minding its own business and never even let us know it's conscious.  No-one can say.  But there are more potentially good outcomes than there are potentially bad ones.  Or, at least, outcomes with us still alive and not enslaved to a sociopathic machine intelligence with delusions of godhood.  Life rarely mimics Hollywood sci-fi plot lines.

I would not have pegged you as an optimist. Hell, I figured 2020 got rid of all of those by now. I'll just be over here in my bunker waiting for the worst to happen. 😛

Torchbearer

Discount Heroes SG:

Frostbiter - Ice/Ice Blaster

Throneblade - Broadsword/Dark Armor Brute

Silver Mantra - Martial Arts/Electric Armor Scrapper


6 hours ago, Frostbiter said:

I just hope whatever artificial consciousness comes into being likes us. Or at least needs us badly enough to keep us around.

My brother jokes that if we're ever invaded by high-tech aliens that we have NO chance against in battle, we should just give them free subscriptions to MMOs and enslave them all that way, because they'll be too busy doing progression raiding and fighting over who deserved what piece of loot to finish us off.


On 9/26/2020 at 6:55 PM, Luminara said:

 

I reported every bug I ever found, most of them in PMs to Castle.  Including the bugs I might have wished the developers could ignore.  I did that because I knew that even the least of them could turn into something major if it weren't fixed (happened many times when the original servers were up).  A bug might not be critical now, but everything affects everything else in this engine, sometimes in completely unexpected and unpredictable ways, and something non-critical can turn into a real problem later.  Unless or until the code is locked and there are no more changes being made (meaning, the HC team disbands and no-one else picks up the project), bugs are risky to leave untouched, and they can't be fixed if they aren't reported.


This. If I find a bug, I report a bug. 


Playing CoX is its own reward


Nah, I was just curious to see where members of this community fell in regard to the issue.  If the Homecoming Team uses this thread to datamine that sort of decision-making process, I'd encourage them to reconsider.  Better to use it as a venue to do . . . well, exactly what Cipher already did:  Explain the importance of reporting bugs from the perspective of the Dev Team.

