Algorithmic violence and automated decision-making


  • Gabriel Cemin Petry
  • Haide Maria Hupffer


Keywords: Violence; Data protection; Fundamental rights.


The article discusses whether automated decisions can constitute acts of violence, as well as possible legal responses to this problem. To that end, the hypothetical-deductive method of investigation is adopted, based on bibliographic research. Violence, as outlined by Byung-Chul Han, does not disappear over time; rather, it reinvents and transforms itself. In a hyperconnected society, decision-making by algorithms or Artificial Intelligence is an increasingly frequent reality in the most varied areas, such as health, security, education, and finance. An algorithm, by itself, is not capable of committing an act of violence; it can, however, instrumentalize violence, since it is capable of guiding critical decisions in the military sector, for example, as well as of enhancing and integrating models that subject individuals to imposed standards and scores, which at times amount to unfair discriminatory practices. Since the processing of personal data is part of the decision-making process, data protection legislation becomes applicable, such as the European GDPR and the Brazilian LGPD, which, in addition to establishing principles, obligations, and rights, guarantee the individual the right not to be subjected to automated decisions and the right to have such decisions reviewed.