“We don’t need to automate jobs, if we don’t want to”

Are we ready for smart machines? This Monday Effective Altruism HSG organized a public discussion on the next wave of automation and its economic as well as ethical implications. While the panelists remained confident that the AI revolution is not immediately around the corner, they agreed that more is needed. More research. More experimentation. More visions. More transparency. And maybe, also more caution.

Paulina Widmer, the president of EA@HSG, opened the event with a brief welcome note. Kaspar Etter, computer scientist, entrepreneur and moderator of the panel, then gave a short introduction to machine learning and how (increasingly intelligent) „software is eating the world“, before opening the discussion. AI and Industrie 4.0 expert Prof. Dr. Jana Koehler seconded that observation and explained that the „4th Industrial Revolution“ means that all industries are moving towards an information society and that computer science is becoming the core of almost all products and services. The areas where AI will emerge much more strongly in the near future mainly involve natural language understanding, such as real-time translation, digital assistants or legal research. „Real“, humanoid robots, on the other hand, are still far away, because the mechanical engineering side is comparatively underdeveloped. She expects this technological development to be rather continuous; nevertheless, the economic and social system will have to adapt to this new environment. To ensure a smooth transition, Koehler would encourage institutions such as the HSG and the economic sciences in general to do more future-oriented research, and she added: “We don’t need to automate jobs, if we don’t want to, but we need an economic system that gives us a choice, at the moment we don’t have the choice.”

Prof. Dr. Jana Koehler and David Iselin (Photo: Oscar Hong)

As the economist on the panel, David Iselin, explained, current data shows no technological unemployment, and the capital-to-labor ratio in Switzerland is stable. What can be seen, however, are signs of a digital divide, with winner-takes-all winners on one side and „bullshit jobs“ on the other. In terms of readiness for automation, lifelong learning and the removal of dead ends are very important, so that the labor force can adapt to new technologies. When Kaspar Etter asked about medium- to long-term forecasts, however, Iselin remained very cautious, as there are simply too many uncertainties.

A basic income for all?

One possible solution to automation and the threat of technological unemployment that is brought up fairly often is the idea of a universal basic income (UBI). Enno Schmidt, the co-founder of the initiative „Für ein bedingungsloses Grundeinkommen“ (for an unconditional basic income), hopes to introduce exactly that to Switzerland with the vote on June 5th. For him, however, UBI is not necessarily tied to technological development; he sees it more in line with the abolition of slavery, the suffrage movement or the introduction of old-age pensions, as one step further in terms of social progress. What makes UBI such a beautiful idea to him is the freedom it conveys. As he explains, the initiative isn’t against work at all, but would provide a baseline so that people can pursue work in which they find purpose.

Enno Schmidt (Photo: Oscar Hong)

However, whether and how such a basic income could function, and whether it leads to more or less purpose, is contested. Etter mentioned the possibility of capital flight, as recently highlighted by the Panama Papers, and as Iselin pointed out, such a new system could lead to inflation that „eats“ the basic income away. The fourth panelist, the philosopher and president of the Effective Altruism Foundation, Adriano Mannino, argued for a more experimental and gradual approach to such questions. In the end this is an empirical question, so different models should be tried out and evaluated in small-scale political experiments, rather than changing the whole economic system very abruptly.

Automated decision-making

A challenge that arises independently of labor market concerns is the increasing scope of agency that algorithms have. Adriano Mannino highlighted that this brings with it legal and moral questions about who is accountable for the decisions such a system makes and what morals we encode in such systems, for example in the case of “trolley problems”. In the longer term we also have to ask questions about the rights of digital intelligences. When does an AI have a right to live?

Adriano Mannino (Photo: Oscar Hong)

For Koehler it is also very important that we achieve transparency about the throughput of digital systems, that is, the way a decision was made. On the one hand, companies such as Google or Facebook have massive power over what the user sees and what they do not; on the other hand, neural networks, even if they surpass human accuracy at a task, are not infallible, especially when confronted with unexpected types of input. Complex interactions could lead to accidents with potentially grave consequences, as seen, for example, in the Flash Crash of 2010. Therefore, AI needs a robust architecture with transparency and ways to verify the validity of its outcomes.

The big picture

While the social impact of automation on the job market cannot be completely ignored, Mannino also made the case that focusing on longer-term risks, such as the existential risk from artificial general intelligence (AGI), would be more effective in terms of expected value: even if we assign it a very low probability, its potential impact is astronomical. Our own brain shows that AGI is possible in principle, and most experts agree that AGI will emerge within this century. Mannino also gave the example of Google DeepMind, which, after mastering games from the 70s and 80s, has already started to master early 3D games from the 90s solely from pixel input, with the real world being essentially nothing but a very complicated 3D game. Due to economic incentives, almost all money currently goes into capacity building and very little into safety research. Therefore, he argued, it might be wise for states to finance AI safety research. After all, we all like living, and with AGI there is a very real possibility that we have to get it right the first time, because there won’t be a second time.

