• 13 Posts
  • 45 Comments
Joined 2 years ago
Cake day: July 4th, 2023




  • Yes, I'd like to know that too, especially since the use of Palantir already strikes me as incompatible with the German constitution (for example, the fundamental right to informational self-determination, which the Bundesverfassungsgericht derived from Art. 2 Abs. 1 in conjunction with Art. 1 Abs. 1 GG).

    Exactly for that reason: I suspect that this push goes far beyond the usual incompetence or lobbying efforts on Thiel's part - I can only explain it by assuming that our political operators are being paid to undermine the rights of the very people they are supposed to represent. If not that, then it at least looks as though their job is to ensure that many millions are wasted on Palantir without any benefit whatsoever (already the case in Bavaria and BW, for example), because this software is completely useless unless it is fed with data that is then merged - and that is permitted neither in Germany nor at the European level.

    As I said: I cannot explain it other than with corruption, because the legal situation is completely clear and the security risk in terms of foreign espionage is enormous.

    That is conspiracy-theory territory, of course, and cannot be proven, but with these people nothing surprises me anymore.


  • Our politicians are simply absurd.

    Palantir is fundamentally incompatible with European law (the GDPR, for example) - moreover, it is a US application that opens the door to the surveillance of EU citizens (also) by the United States and enables industrial espionage on an unprecedented scale - both entirely without need.

    It is criminal to want to place an application that (illegitimately) intrudes so deeply into civil rights into the hands of fascists, of all people, who demonstrably have no interest whatsoever in democratic structures.

    How anyone can even consider this, especially in the current situation, is something I can only explain with corruption, because it is so utterly brainless and could not run more strongly counter to the interests of Europeans.



  • Thank you for your insightful comment - and also for your commitment!

    I think that this issue - if it is relevant at all - needs to be solved by the developers: for example, by preventing a post from being deleted once it has received a certain number of comments/upvotes/downvotes, while still allowing the user name to be removed (which is technically difficult, of course).

    If that’s possible (can’t say for sure), then I’d go for that. Anything else would be punishing those who post here in the first place, and I think that should be avoided at all costs.

    Everyone should retain sovereignty over their posts, but I think there can be a certain level of interest at which personal posts become somewhat public property. Where that threshold lies can certainly be determined by the community, but it is definitely also a technical question - and probably a difficult one, not least because of edits to the original post.
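    To make the idea concrete, here is a minimal sketch of the policy described above: a post can be deleted outright only while engagement is low; past a community-chosen threshold, deletion is refused but the author can still be anonymized. All names here (`Post`, `VISIBILITY_THRESHOLD`, `request_deletion`) are hypothetical and do not correspond to any real Lemmy API.

    ```python
    # Hypothetical sketch, not a real Lemmy API: posts become "public
    # property" above an engagement threshold, but the author's name can
    # always be scrubbed.
    from dataclasses import dataclass

    VISIBILITY_THRESHOLD = 25  # comments + votes; value set by the community


    @dataclass
    class Post:
        author: str
        body: str
        comments: int = 0
        upvotes: int = 0
        downvotes: int = 0

        @property
        def engagement(self) -> int:
            return self.comments + self.upvotes + self.downvotes


    def request_deletion(post: Post) -> str:
        """Delete low-engagement posts; only anonymize high-engagement ones."""
        if post.engagement < VISIBILITY_THRESHOLD:
            post.body = ""
            post.author = ""
            return "deleted"
        # Content stays for the community; the name is removed.
        post.author = "[anonymized]"
        return "anonymized"
    ```

    The edit problem remains: if the author can still edit the surviving body, anonymization alone does not preserve the thread, which is why the text above calls this a hard technical question.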





  • Yes, that’s right: LLMs are definitely sold that way: “Save on employees because you can do it with our AI”, which sounds attractive to naive employers because personnel costs are the largest expense in almost every company.

    And it's true: this marketing obscures what LLMs can actually do and where their value lies. The technology is merely a tool that workers in almost any industry can use to work even more effectively - but that is apparently not enough of a USP. People are so brainwashed that they eat out of the marketers' hands, because they hear exactly what they want to hear: "I don't need employees anymore, because now there are much cheaper robot slaves."

    In my opinion, all of this will lead to a step backward for humanity, because it will mean that many artists, scientists, journalists, writers, even administrative staff and many other essential members of society will no longer be able to make a living from their profession.

    In the longer term, it will lead to the death of innovation and creativity because it will no longer be possible to make a living from such traits - AI can’t do any of that.

    In other words, AI is the wet dream of all those who do not contribute to value creation but (strangely enough) are paid handsomely to manage the wonderful work of those who actually do contribute to value creation.

    Unfortunately, it was to be expected how this technology would be used, because sadly, in most societies, the focus is not on contributing to society, but on who has made the most money from these contributions, which in the vast majority of cases is not the person who made the contribution. The use of AI is also based on this logic – how could it be otherwise?


  • Indeed. A major problem with LLMs is the marketing term “artificial intelligence”: it creates the false impression that these models actually understand their output, which is not the case. In essence, it is a probability calculation based on what is available in the training data and what the user asks - a kind of collage of different pieces of information from the training data, mixed and arranged in a new way based on the query.

    As long as the prompt doesn’t conflict directly with the data set (“Explain why the world is flat”), you get answers that are relevant to the question - however, LLMs are neither able to decide on their own whether one source is more credible than another, nor can they make moral decisions, because they do not “think”; they are, so to speak, merely another kind of search engine.

    However, the way many users use LLMs is more like a conversation with a human being – and that’s not what these models are; it’s just how they’re sold but not at all what they are designed to do or what they are capable of.

    But yes, this will be a major problem in the future, as most models are controlled by billionaires who do not want them to be what they should be: tools that help parse large amounts of information. They want them to be propaganda machines. So as with other technologies: AI itself is not the problem, but the ruthless way in which it is being used (by greedy wheelers and dealers).
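    The “probability calculation” point above can be illustrated with a toy model. This bigram counter is a drastic simplification (real LLMs use neural networks over subword tokens, not raw word counts), but the principle - counting what followed what in the training data, then emitting the statistically likely continuation - is the same; no understanding is involved. The tiny corpus and the function name `most_likely_next` are illustrative only.

    ```python
    # Toy bigram "language model": predicts the next word purely from
    # co-occurrence counts in the training data -- no understanding, just
    # probability.
    from collections import Counter, defaultdict

    training_data = (
        "the cat sat on the mat . the dog sat on the rug ."
    ).split()

    # Count which token follows which in the training data.
    follows: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(training_data, training_data[1:]):
        follows[prev][nxt] += 1


    def most_likely_next(token: str) -> str:
        """Return the continuation seen most often in training."""
        return follows[token].most_common(1)[0][0]
    ```

    Here `most_likely_next("sat")` yields `"on"` simply because "sat on" occurs most often in the corpus - which is exactly why such a model can confidently reproduce whatever bias its training data contains.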



  • But we don’t have tech giants like Microsoft, Apple, Meta, Oracle, PayPal and so on, who leave back doors open for the US government.

    Absurdly, however, we have politicians who are in the process of handing over even the most sensitive information about each and every EU citizen to a US company: namely Palantir, which is already in use in the German state of Bavaria, for example. I can only explain this with corruption, because these politicians should know that the US government, currently a bunch of criminals, can access this data at any time.

    What I mean is that we need to be cautious not only about Chinese technology, such as Huawei’s radio masts, but also about US technology - for exactly the same reason.

    Unfortunately, Europe is currently in a weak position, and this is the result of decades of one-sided orientation toward the US, which means that we are still dependent on them.



  • That sounds reasonable, but we should definitely not count on the US. Instead, we should prepare ourselves for a situation in which the US is also led by a ruthless despot and even sides with Russia.

    It’s not as if that’s unrealistic, which is why I don’t understand why it isn’t treated as a possible scenario.

    The US is no longer an ally of the EU; the current US government leaves no doubt about that. Even Germany and Poland, which have traditionally had close ties to the US, should not be under the illusion that Trump and his henchmen have any interest in preserving democratic structures. It should be obvious by now that this administration cannot be trusted as it passes off organized crime as the legitimate actions of a state - in this respect, Trump’s syndicate is hardly any different from Putin’s - they have ties and a lot in common.