What are the risks associated with regulating AI?
Asked 6 years, 5 months ago · Modified today · Viewed 253 times
As part of a research project for college, I would like to understand what many of you consider to be the risks associated with regulating Artificial Intelligence, such as whether regulation is too risky in regard to limiting progress, or too risky in regard to uninformed regulation.

Tags: philosophy, social, risk-management

Asked Oct 21, 2019 at 3:17 by JsAdam; edited Nov 21, 2019 at 3:25 by nbro.

Comments:

DukeZhou (Oct 21, 2019 at 21:21): I took the link to the survey out of the main text of the question, but I think it would be ok to post it in the comments: docs.google.com/forms/d/e/…

JsAdam (Oct 21, 2019 at 22:25): Here is a link to a survey I put together, also as part of the project. If you have the time to look at it, please do! Thank you! :) docs.google.com/forms/d/…

5 Answers

Answer by Mike S. (score 1, answered Nov 21, 2019 at 22:51):

I don't think regulating something necessarily causes that regulation to become a "risk" de facto. Regulation - including overregulation - may, in fact, aid the dialogue between practitioners, which may end up educating the regulators, the public, and the practitioners themselves. My answers to your survey would most likely be "it depends..." or "no risk", which isn't to say regulation isn't an impediment, just that it isn't a "risk", per se.

Answer by k.c. sayz 'k.c sayz' (score 0, answered Oct 21, 2019 at 20:57, edited Oct 21, 2019 at 23:17):

Risks of regulation?

As you mention in your survey, it is generally understood that the primary concern with regulating AI research is that other parties risk falling behind.

Should we regulate it? Can it be done?
You can't really "regulate" technological development in the same way you can regulate many other things. Aside from the fact that there is no global governing body that could impose such regulation on nations, you can't really regulate someone's research any more than you can control how people think: a pen, paper, and a computer are all you need to do research in math/AI. The NSA tried to regulate encryption, citing national security reasons, during a saga known as the Crypto Wars. They failed.

What is AI anyway? How will we get there? What will it be like?

Honestly, from the phrasing of the questions in your survey, I get the impression that you don't really understand the hypothetical existential risk posed by AI. Personally I don't really buy into that thesis, but in any case, if such a super-intelligent agent emerges, the problem isn't so much "oh no, my city is destroyed" or "oh no, so many people are killed", but rather "all of humanity is enslaved without being aware of it" or "everything is dead". We think this might happen because we assume AI is all-powerful and we project our own negative qualities onto this unknown agent with unknown power. It's mostly fear, really. This is all speculation, and by definition you cannot predict the behavior of an agent smarter than you, so literally every comment on this topic is unfounded speculation. The only thing that is true is that we don't know.

There is another aspect of AI that is dangerous, one which concerns how humans use it: e.g. facial recognition, automated weapon systems, automated hacking. These are more pressing issues.

What should we do?

We are forced to research AI because no party can afford to fall behind, but at the same time we are pushing ourselves towards a dangerous future: it's a catch-22. Current consensus and practice is that researchers publish their results.
Compared to other areas of academia, whose research is often locked behind paywalls, ML/AI research is quite publicly accessible. Of course, this doesn't prevent the possibility of a rogue agent.

Answer by DukeZhou (score 0, answered Oct 21, 2019 at 21:21, edited Oct 22, 2019 at 1:09):

I think there is a very strong argument for regulating AI: chiefly, unintentional (or intentional) bias in statistically driven algorithms, and the idea that responsibility can be offloaded to processes that cannot be meaningfully punished when they transgress. Additionally, the history of technology, especially since the industrial revolution, strongly validates neo-luddism in the sense that the problems arising from the implementation of new technology are not always predictable. In this sense, there are both ethical reasons to consider regulation and minimax reasons (here in the sense of erring on the side of caution to minimize the maximum potential downside).

Risk of falling behind

A risk is that not all participants will hew to the regulations, giving those who don't a significant advantage. That, in and of itself, is not a reason to forgo sensible regulation, in that penalties at least serve as a potential deterrent.

Opportunity cost

Not a risk, but a driver: the idea of "leaving money on the table", in that not implementing a given technology forgoes greater utility, sacrificing potential benefit. This is not invalid, but it shouldn't ignore hidden costs. For instance, the wide-scale deployment of even primitive bots has had a profound social impact.

Answer by tmaj (score 0, answered Nov 21, 2019 at 3:41, edited Nov 21, 2019 at 22:39):

My thoughts

AI is already indirectly regulated.
This is important to acknowledge, and this acknowledgement is, in my opinion, missing from the discourse about law and AI. I'm assuming that your question is about law that directly targets AI technologies, and this exemplifies one of the risks of regulating AI: that the law will focus on the technology rather than on outcomes. Another concern is that law that is inadequate, or that becomes outdated quickly, creates a false sense of security, and this could create a situation that is even more dangerous than if the laws were not there.

Law and innovation

When it comes to the view that law stifles innovation, it is paramount to acknowledge that some regulation can have a very positive effect. There is no general rule that there is an inverse relation between law and innovation.

Pacing problem and Collingridge dilemma

The following is essentially what Wendell Wallach says in an episode of the Future of Life Institute's AI Alignment Podcast entitled "Machine Ethics and AI Governance with Wendell Wallach". The pacing problem refers to the fact that scientific discovery and technological innovation are far outpacing our ability to put appropriate ethical and legal oversight in place. Wallach goes on to say that the pacing problem converges with what is now called the Collingridge dilemma, a problem that has 'bedevilled' people in technology and governance since 1980, and he defines it the following way: while it is easiest to regulate a technology early in its development, early in its development we have little idea of what its societal impact will be; by the time we do understand the challenges and the societal impact, the technology will be so deeply entrenched in our society that it is very difficult to change its trajectory.

See also: Collingridge dilemma on Wikipedia; and The Social Control of Technology by David Collingridge, published 1980 by Frances Pinter.
Answer by Dr. Umesh Shookan (score 0, answered 10 hours ago, new contributor):

The risks associated with regulating Artificial Intelligence are that it can limit innovation, be uninformed and non-current, and reduce competitive advantages. These are heavy risks to companies that lose the opportunity to decrease costs, increase productivity, maintain or improve market share, etc. Depending on a company's size, maturity, and industry positioning, its survival might depend on innovation or cost-optimisation; a company may face downsizing or closure because its competitors are out-competing it.

The opposing question is: what are the risks associated with decreasing regulation of Artificial Intelligence? A classical problem is the balancing act between lagging regulation or over-regulation and the speed of technological innovation. There are sufficient recent examples that inform a leaning more towards regulating AI. In light of potential lack of due diligence, or negligence, in the rapid deployment of emerging technologies, one could say that the greater risk is that AI may already be beyond human understanding.
We are forced to research AI because no party can afford to fall behind, but at the same time we are pushing ourselves towards a dangerous future: it's a catch-22.... Consensus and current practice suggests that every researcher publicizes our results. Compared to other areas of academia, whose research is often locked behind paywall, ML/AI research is quite publicly accessible. Of course, this doesn't prevent the possibility of a rouge agent.... Share Improve this answer Follow edited Oct 21, 2019 at 23:17 answered Oct 21, 2019 at 20:57 k.c. sayz 'k.c sayz' 2,131 13 13 silver badges 27 27 bronze badges $\endgroup$
Risks of regulation? As you mention in your survey, it is generally understood that the primary concern with regulating AI research is that other parties risk falling behind. Should we regulate it? Can it be done? You can't really "regulate" technological development in the same way you can regulate some other things in general. Asides from the fact that there is no global governance that can implement this regulation on nations, you can't really regulate someone's research more than you can control how people think: you just need a pen / paper / computer to do any research in math/AI. The NSA tried to regulate encryption citing national security reasons during a saga known as the Crypto Wars . They failed. What is AI anyways? How will we get there? What will it be like? Honestly, from the phrasing of your questions in your survey, I get the impression that you don't really understand the hypothetical existential risk due to AI. Personally I don't really buy into their thesis, but in any case, if such a super-intelligent agent emerges, the problem isn't so much "oh no my city is destroyed" or "oh no so many people are killed", but more so "all of humanity is enslaved without being aware" or "everything is dead". We think this might happen because we assume AI is all-powerful and we project our own negative qualities onto this unknown agent with unknown power. It's mostly fear really. This is all speculation, and by definition you cannot predict the behavior of an agent smarter than you, so literally every single comment on this topic is purely unbased speculation. The only thing that is true is that we don't know. There is another aspect of AI which is dangerous, which more so concerns with how humans use it: i.e. facial recognition, automated weapon systems, automated hacking. These are more pressing issues. What should we do? 
We are forced to research AI because no party can afford to fall behind, but at the same time we are pushing ourselves towards a dangerous future: it's a catch-22.... Consensus and current practice suggests that every researcher publicizes our results. Compared to other areas of academia, whose research is often locked behind paywall, ML/AI research is quite publicly accessible. Of course, this doesn't prevent the possibility of a rouge agent....
As you mention in your survey, it is generally understood that the primary concern with regulating AI research is that other parties risk falling behind.
Should we regulate it? Can it be done?
You can't really "regulate" technological development the way you can regulate many other things. Aside from the fact that there is no global government that could impose such regulation on nations, you can't regulate someone's research any more than you can control how people think: all you need is a pen, paper, and a computer to do research in math/AI.
The NSA tried to regulate encryption, citing national security, during a saga known as the Crypto Wars. They failed.
What is AI anyway? How will we get there? What will it be like?
Honestly, from the phrasing of the questions in your survey, I get the impression that you don't really understand the hypothetical existential risk posed by AI. Personally I don't really buy into that thesis, but in any case, if such a super-intelligent agent emerges, the problem isn't so much "oh no, my city is destroyed" or "oh no, so many people are killed", but rather "all of humanity is enslaved without being aware of it" or "everything is dead". We think this might happen because we assume AI would be all-powerful, and we project our own negative qualities onto this unknown agent with unknown power. It's mostly fear, really.
This is all speculation, and by definition you cannot predict the behavior of an agent smarter than you, so literally every comment on this topic is pure, unfounded speculation. The only thing we know for certain is that we don't know.
There is another aspect of AI that is dangerous, one that concerns how humans use it: e.g. facial recognition, automated weapons systems, automated hacking. These are more pressing issues.
What should we do? We are forced to research AI because no party can afford to fall behind, yet at the same time we are pushing ourselves towards a dangerous future: it's a catch-22...
Consensus and current practice suggest that researchers publicize their results. Compared to other areas of academia, whose research is often locked behind paywalls, ML/AI research is quite publicly accessible. Of course, this doesn't prevent the possibility of a rogue agent...
edited Oct 21, 2019 at 23:17, answered Oct 21, 2019 at 20:57 by k.c. sayz 'k.c sayz'
I think there is a very strong argument for regulating AI: chiefly, unintentional (or intentional) bias in statistically driven algorithms, and the idea that responsibility can be offloaded to processes that cannot be meaningfully punished when they transgress. Additionally, the history of technology, especially since the Industrial Revolution, strongly validates neo-Luddism in the sense that the problems arising from implementing a new technology are not always predictable.
In this sense, there are both ethical reasons to consider regulation and minimax reasons (here meaning erring on the side of caution to minimize the maximum potential downside).
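The minimax reasoning above can be sketched as a toy decision rule. The payoff numbers below are purely hypothetical illustrations, not estimates of anything:

```python
# Minimax (maximin over utilities): pick the option whose worst-case
# outcome is least bad. All payoff values here are made up for illustration.
payoffs = {
    "no regulation":     {"benign AI": 10, "harmful AI": -100},
    "light regulation":  {"benign AI": 8,  "harmful AI": -20},
    "strict regulation": {"benign AI": 3,  "harmful AI": -5},
}

def minimax_choice(payoffs):
    # For each policy, find its worst-case payoff across scenarios,
    # then choose the policy that maximizes that worst case.
    worst = {policy: min(outcomes.values()) for policy, outcomes in payoffs.items()}
    return max(worst, key=worst.get)

print(minimax_choice(payoffs))  # -> strict regulation
```

Under these (made-up) numbers the rule prefers caution, because the worst case of strict regulation (-5) beats the worst cases of the alternatives (-20 and -100), even though its best case is smaller.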
Risk of falling behind
One risk is that not all participants will hew to the regulations, giving those who don't a significant advantage. That, in and of itself, is not a reason to forgo sensible regulation, though: penalties at least serve as a potential deterrent.
Opportunity cost
Not a risk, but a driver: the idea of "leaving money on the table", in that not implementing a given technology forgoes greater utility and sacrifices potential benefit. This is not invalid, but it shouldn't ignore hidden costs. For instance, the wide-scale deployment of even primitive bots has had a profound social impact.
edited Oct 22, 2019 at 1:09, answered Oct 21, 2019 at 21:21 by DukeZhou
My thoughts
AI is already indirectly regulated. This is important to acknowledge, and that acknowledgement is, in my opinion, missing from the discourse about law and AI.
I'm assuming your question is about law that directly targets AI technologies, which exemplifies one of the risks of regulating AI: that the law will focus on the technology rather than on outcomes.
Another concern is that law that is inadequate, or that quickly becomes outdated, creates a false sense of security, and this could produce a situation even more dangerous than if the laws were not there.
Law and innovation
When it comes to the view that law stifles innovation, it is paramount to acknowledge that some regulation can have a very positive effect. There is no general rule that law and innovation are inversely related.
Pacing problem and Collingridge dilemma
The following is essentially what Wendell Wallach says in an episode of the Future of Life Institute's AI Alignment Podcast entitled Machine Ethics and AI Governance with Wendell Wallach.
The pacing problem refers to the fact that scientific discovery and technological innovation are far outpacing our ability to put appropriate ethical and legal oversight in place.
Wallach goes on to say that the pacing problem converges with what is now called the Collingridge dilemma, a problem that has 'bedevilled' people in technology and governance since 1980, which he defines the following way:
While it was easiest to regulate a technology early in its development, early in its development we had little idea of what its societal impact would be. By the time we did understand the challenges and the societal impact, the technology would be so deeply entrenched in our society that it would be very difficult to change its trajectory.
See also: Collingridge dilemma on Wikipedia; and The social control of technology by David Collingridge, published 1980 by Frances Pinter.
edited Nov 21, 2019 at 22:39, answered Nov 21, 2019 at 3:41 by tmaj
The risks associated with regulating Artificial Intelligence are that regulation can limit innovation, be uninformed or out of date, and reduce competitive advantages. These are heavy risks for companies that lose the opportunity to decrease costs, increase productivity, or maintain or improve market share. Depending on a company's size, maturity, and industry positioning, its survival might depend on innovation or cost optimisation, and a company may face downsizing or closure because its competitors out-compete it.
The opposing question is: what are the risks of decreasing regulation of Artificial Intelligence? A classical problem is the balancing act between lagging or excessive regulation and the speed of technological innovation. There are enough recent examples to justify leaning more towards regulating AI. In light of a potential lack of due diligence, or outright negligence, in the rapid deployment of emerging technologies, one could say that the greater risk is that AI may already be beyond human understanding.
Share Improve this answer Follow answered 10 hours ago Dr. Umesh Shookan 1 New contributor Dr. Umesh Shookan is a new contributor to this site. Take care in asking for clarification, commenting, and answering. Check out our Code of Conduct .