Post by ngnius on Oct 20, 2017 15:29:49 GMT
Credits to The Compiler for the original idea
Basically, as science becomes more complex and harder to understand, owing to the amount of prior knowledge required and the amount of work needed to make a new discovery or confirm a claim, AIs will be the best option to do the work, almost completely removing humans (and our horrible memory) from the scientific community.
{Discord discussion (partly about the limits of human-designed AI too)}
[10:42 AM] NGnius: I don't think we really need AI-inspired gene editing - we already know how to do it. It might be a more interesting topic if we broaden it to whether AI research (AIs researching different ideas for us) is the future
[10:43 AM] Not the complier: No I meant copying patterns invented by AI into genes
[10:44 AM] Not the complier: I remember reading about a computer generated mathematical proof which is not possible for any human to verify because it's so long or complicated or something
[10:45 AM] NGnius: Yeh, ik. It would only speed it up a bit (if at all). But we already have normal computer programs to find where to edit a gene, and what to replace it with (edited)
[10:46 AM] NGnius: Mathematical proofs are a great application of AI - they aren't prone to error as much as humans are, especially when doing a lot of operations at once, or remembering a lot of things (edited)
[10:48 AM] Not the complier: Yeah the article expressed worry about can we really rely on computers to get proofs which we take on faith
[10:48 AM] Not the complier: I don't get it, I already trust my calculator
[10:49 AM] NGnius: The issue is that as code gets bigger and more complicated, we might write something that isn't quite right, which causes the code to not execute exactly as we want
[10:49 AM] NGnius: The simple example of this is the "off by one error"
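{A minimal sketch of the "off by one error" mentioned above, in Python; the function names and values are illustrative, not from any real codebase:}

```python
def sum_first_n_buggy(values, n):
    """Sum the first n items -- but the loop is off by one."""
    total = 0
    for i in range(1, n):  # bug: skips index 0 and stops one short
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    """Correct version: iterate over indices 0 .. n-1."""
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = [10, 20, 30, 40]
print(sum_first_n_buggy(data, 3))  # prints 50 -- silently wrong
print(sum_first_n_fixed(data, 3))  # prints 60 -- correct
```

{The buggy version still runs without any error message, which is exactly why this kind of mistake creates the uncertainty discussed below.}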
[10:50 AM] Not the complier: Well there's always bugs
[10:51 AM] Not the complier: Oh
[10:51 AM] NGnius: Yeh, and that results in uncertainty
[10:51 AM] Not the complier: Testing is gonna become so important
[10:51 AM] NGnius: It's impossible to test an AI if it can learn
[10:51 AM] Not the complier: I gotta take a course
[10:52 AM] Not the complier: Yeah how does one go about testing an AI for bugs
[10:53 AM] NGnius: Assuming the AI has unlimited potential for learning, the amount of testing you have to do to test every possibility that the AI can do is infinite
[10:53 AM] NGnius: You can test specific parts of the AI though, which is the best you can ask for
[10:54 AM] Not the complier: Millions of possibilities is always a problem for testers
[10:54 AM] NGnius: e.g. test that it's interpreting the given data right, or that its learning components actually allow it to learn perfectly
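{The idea of testing an AI's individual parts, rather than every possible behaviour, can be sketched like this; the parser and the learning-rule update are hypothetical stand-ins, not components of any real system:}

```python
def parse_sample(line):
    """Interpret one comma-separated line of data as (features, label)."""
    parts = [float(x) for x in line.split(",")]
    return parts[:-1], parts[-1]

def sgd_update(weight, feature, label, lr=0.1):
    """One gradient step for a 1-D least-squares model."""
    prediction = weight * feature
    gradient = 2 * (prediction - label) * feature
    return weight - lr * gradient

# Unit tests for each component in isolation:
# 1. the data-interpretation step parses a line correctly
assert parse_sample("1.0,2.0,3.0") == ([1.0, 2.0], 3.0)
# 2. the learning step moves the weight toward the target (2.0 here)
w = sgd_update(0.0, 1.0, 2.0)
assert 0.0 < w <= 2.0
```

{Each component is checked against a known case, even though the system's overall learned behaviour can't be exhaustively tested.}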
[10:55 AM] Not the complier: That works for the AI output
[10:56 AM] Not the complier: But how can one test the AI's program (edited)
[10:56 AM] NGnius: You can't, unless you have infinite time before you run it for real
[10:59 AM] Not the complier: It just destroys the concept of code quality as we know it
[10:59 AM] NGnius: "code quality" lol that doesn't exist
[10:59 AM] NGnius: How many games are bug-free?
[11:00 AM] Not the complier: It's never a hundred percent of course, but pretty close
[11:00 AM] Macecurb: "Code quality" need not mean "bug-free". Documentation, good commenting, clear variable names, that sort of thing.
[11:00 AM] NGnius: Yeh, but that's still possible with AI coding... even if you teach the AI to code itself
[11:00 AM] Not the complier: And using tabs
[11:00 AM] NGnius: #tabsnotspaces
[11:01 AM] Not the complier: That's not possible if AI writes a whole lot of code
[11:02 AM] Macecurb: You could always teach the AI to use tabs.
[11:02 AM] Not the complier: What's the point, nobody's gonna read it
[11:02 AM] NGnius: depending on what language you use, indentation isn't optional
[11:02 AM] NGnius: And documentation is just another thing to teach the AI to do, no biggy
[11:03 AM] Not the complier: ... I don't think that'll happen even in the near future where AI programs are ubiquitous
[11:04 AM] Macecurb: It could. Ideally, it should.
[11:04 AM] NGnius: It's not that far off
[11:04 AM] NGnius: It's the next logical step after teaching AIs how to learn - how to learn everything