Digital Transformation
#TheFutureIsYours Shaping Europe's digital future
A global cooperation on AI
Many countries seem to treat artificial intelligence (AI) solely as a strategic economic industry, where the goal is to develop AI faster than the others.
But some people* in the field worry that AIs could, in the coming decades, become more intelligent than humans and, if not carefully designed, could become an existential risk; that is, a threat to humanity in general.
The problem is: we don't know how to "carefully design" an AI that is more intelligent than we are, nor when (or even whether) we will know how, nor how much work it will take.
For this reason, it is important for major powers (including the EU) to meet and cooperate extensively to make sure that, if a smarter-than-human AI appears, it is for the good of humanity.
Additionally, the EU should support work on safe AI, which is currently carried out almost exclusively in the US.
Humanity has shown, through climate change, its ability to create threats to itself; but unsafe AIs could be far more difficult (or outright impossible) to stop. Let's make sure we solve the problem before it is too late.
*If interested, see, for example:
– For a short outline:
Stuart Armstrong, Smarter Than Us (free on the internet)
– For more details:
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control.
Nick Bostrom, Superintelligence.
Nick Bostrom, superintelligens."}},"title":{"fr":"A global cooperation on AI","machine_translations":{"bg":"Глобално сътрудничество в областта на оценката на въздействието","cs":"Globální spolupráce v oblasti posuzování dopadů","da":"Et globalt samarbejde om konsekvensanalyse","de":"A Global Cooperation on AI","el":"Παγκόσμια συνεργασία για την εκτίμηση επιπτώσεων","en":"A global cooperation on IA","es":"Una cooperación mundial en materia de evaluación de impacto","et":"Ülemaailmne koostöö mõjuhindamise valdkonnas","fi":"Maailmanlaajuinen yhteistyö vaikutustenarvioinnin alalla","ga":"Comhar domhanda maidir le measúnú tionchair","hr":"Globalna suradnja u području procjene učinka","hu":"Globális együttműködés a hatásvizsgálat terén","it":"Una cooperazione globale in materia di VI","lt":"Visuotinis bendradarbiavimas poveikio vertinimo srityje","lv":"Globāla sadarbība ietekmes novērtējuma jomā","mt":"Kooperazzjoni globali dwar il-VI","nl":"Wereldwijde samenwerking op het gebied van effectbeoordeling","pl":"Globalna współpraca w zakresie oceny skutków","pt":"Uma cooperação global em matéria de avaliação de impacto","ro":"O cooperare globală privind evaluarea impactului","sk":"Globálna spolupráca v oblasti posúdenia vplyvu","sl":"Globalno sodelovanje na področju ocene učinka","sv":"Ett globalt samarbete om konsekvensbedömning"}}}
Ten odcisk palca jest liczony przy pomocy algorytmu mieszającego SHA256. Aby samodzielnie go zreplikować, można skorzystać z Internetowy kalkulator MD5 i skopiować oraz wkleić dane źródłowe.
Udostępnij:
Link udostępniania:
Prosimy o wklejenie tego kodu na swoją stronę:
<script src="https://futureu.europa.eu/processes/Digital/f/15/proposals/255/embed.js?locale=pl"></script>
<noscript><iframe src="https://futureu.europa.eu/processes/Digital/f/15/proposals/255/embed.html?locale=pl" frameborder="0" scrolling="vertical"></iframe></noscript>
Zgłoś niestosowną treść
Czy ta treść jest niestosowna?
- Zadzwoń do nas pod numer 00 800 6 7 8 9 10 11
- Skorzystaj z innych form kontaktu telefonicznego
- Napisz do nas, korzystając z formularza
- Spotkaj się z nami w lokalnym biurze UE
- Parlament Europejski
- Rada Europejska
- Rada Unii Europejskiej
- Komisja Europejska
- Trybunał Sprawiedliwości Unii Europejskiej
- Europejski Bank Centralny (EBC)
- Europejski Trybunał Obrachunkowy
- Europejska Służba Działań Zewnętrznych (ESDZ)
- Europejski Komitet Ekonomiczno-Społeczny (EKES)
- Europejski Komitet Regionów (KR)
- Europejski Bank Inwestycyjny (EBI)
- Europejski Rzecznik Praw Obywatelskich
- Europejski Inspektor Ochrony Danych (EIOD)
- Europejska Rada Ochrony Danych
- Europejski Urząd Doboru Kadr
- Urząd Publikacji Unii Europejskiej
- Agencje
Proszę się zalogować
Możesz uzyskać dostęp do platformy z konta zewnętrznego
29 comments
Conversations
Hi,
The human brain is the most complex structure in the known universe. It comprises about 86 billion neurons.
The largest artificial neural networks comprise about 20 million simplified neurons. We are nowhere near having AI at 'human levels'. We are at the level of a simplified frog. Still a capable animal, but we don't fear we will be taken over by frogs. Or elephants.
I can understand that some are in awe of what contemporary neural networks can achieve. But a neural network that can recognize a magpie in a picture cannot open the door. It does not even know what a door is. In fact, it does not even know what a magpie is. The neural network does not 'think' at all; it is a mathematical approximation method. A very capable one, but its ability should not be overrated.
Even if we had a quantum computer that could approach the complexity of the human brain, it would still be confined to the existence of computer software, which relies on, for one thing, a power supply.
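For what it's worth, the "mathematical approximation method" point can be made concrete with a minimal sketch (layer sizes, weights and labels below are invented purely for illustration; this is not any real classifier):

```python
import math
import random

# A toy feed-forward "classifier": just weighted sums and nonlinearities
# mapping an input vector to label probabilities. All weights are random;
# the network has no concept of what a 'magpie' or a 'door' is.

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

N_IN, N_HID, N_OUT = 8, 4, 3
W1 = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [[random.gauss(0, 1) for _ in range(N_HID)] for _ in range(N_OUT)]

def classify(pixels):
    hidden = [math.tanh(sum(w * x for w, x in zip(row, pixels))) for row in W1]
    return softmax([sum(w * h for w, h in zip(row, hidden)) for row in W2])

labels = ["magpie", "crow", "door"]
probs = classify([random.gauss(0, 1) for _ in range(N_IN)])
print(labels[probs.index(max(probs))])
```

Whatever label comes out, nothing in the computation resembles 'knowing' what the label denotes; it is arithmetic end to end.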
Hi :) !
Indeed, as you say, AI is very far from having a (general) human level; but it is very difficult to know what technology will look like in, say, 50 years (think of AI in 1971). Given the risks, it is natural not to wait until the last minute to make sure it is safe.
Many AI researchers actually think it is quite probable for human-level AI to appear in the next 50 years (though they have wildly different opinions and have been mistaken in the past, in both directions; it just shows how unpredictable AI progress is); see, for example: https://arxiv.org/abs/1705.08807v1
(By the way, something that cannot think can still be an existential risk; atomic bombs don't think, but, once launched, kill; climate change doesn't think, but, once started, kills. That said, I'm not afraid of today's AIs.)
1500 characters are too short to justify everything, and that's why I added references at the end of my idea. If you don't have time, please consider at least reading https://bit.ly/3egCtXU
First paragraph, I agree.
Second paragraph, human-level AI. In specific domains or equal to the human brain? We already have human-level AI at playing Go; that is not the future. AlphaGo cannot cook dinner.
Having good code and good hardware is but a fraction of what the deployment of useful AI takes. Most of what goes into it is human intellect, even though it is called AI. It lives in a habitat of good training data, proper training parameters, hardware, operating systems, electricity, including remote power stations and power switches. The relation between AI and humans is that of Almighty Creator and its unwitting subject. AI will always be below humans in the chain of command, as we can survive in environments where AI (and its mechanized subjects) perish.
I'd say a more realistic and pressing danger is that AI - which is indeed very potent technology - will enhance the power of some humans over others - as is already the case.
I enjoyed reading that last link though, good summary.
I speak about artificial general intelligence (AGI); in the survey I linked to, it is defined thus: "'High-level machine intelligence' (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers."
You describe the present state of AI, but it doesn't need to work the same way in the future, and there are incentives to make it possible for AIs to adapt to their environment or to change their own parameters to achieve their goal. AIs only do what their source code says they must do; their source code doesn't tell them they must remain below humans in the chain of command. If their goal is to maximize paperclips, they'll try to make it impossible for humans to stop them, and, then, maximize paperclips (see Bostrom).
"we can survive in environments where AI (and its mechanized subjects) perish"
Like what? I'm pretty sure you'd die on Mars. Anyway, that's irrelevant (not enough space to develop the point; tell me if you want and I'll add a comment).
"we can survive in environments where AI (and its mechanized subjects) perish"
is what I said. Do I have to explain that humans can do without, say, electricity?
The habitats in which humans can survive are much broader.
"AIs only do what their source code says they must do; their source code doesn't tell them"
This is bad logic. You propose that AI can surpass humans "because" we did not tell it it shouldn't.
"If their goal is to maximize paperclips, they'll try to make it impossible for humans to stop them"
If I asked you to defend that statement, you would require dozens of pretty unlikely assumptions.
You cannot use unverifiable statements by Bostrom to defend those same statements. Such a practice is called religion. Obviously, you can believe what you want, but what Bostrom fantasized is not science. Or even coherent.
This is a very flawed argument. Saying there is nothing to worry about because "we're not there yet", or that "you can just pull the plug", are very short-sighted and dangerous ways to think about this.
Please do more research on the alignment problem, the commonly proposed (and debunked) "solutions" to it, and the challenges it presents. You will find that any solution that you might propose is very superficial, has been debunked and deemed completely ineffective or insufficient. So, in short: so far, the alignment problem is unsolved.
Regarding the "we're not there yet". Yes, and be glad that we aren't, because if we were close to AGI without having solved the alignment problem, it would already be too late.
Progress in AI and AGI research is accelerating at unprecedented levels, and with companies like DeepMind and OpenAI fully dedicated to making AGI, it might not be long until we achieve it. Just look at the latest research done.
I do not present an argument, so it cannot be flawed either. I present reality.
I write neural networks. I understand the mathematics behind them. I therefore also understand the capabilities and the limitations, and therefore to what extent the technology is hyped. The research, I already did.
It is a flawed argument to use the fact that the threat is, at least currently or in the foreseeable future, not real, as evidence we are just lucky. That is circular reasoning.
You say we have a problem that we fundamentally cannot solve and that it is upon us pretty quickly; you don't like me pointing out reality, so you label it a flawed argument, and you appeal to authority (a propaganda technique) by claiming I need to do more research for having the audacity to disagree with you.
Now what does that sound like. That sounds like a cult.
AI is potent technology, but it is not magic.
I don't think I'll change your mind, but for anyone else reading this:
Even a monkey can write neural networks; I'm just a web developer and I have written some. Knowing how to implement a NN (which is narrow AI/ANI) doesn't mean you know about AGI, and vice versa.
And yes, neural networks are limited; that's not the point. I'm not saying that AGI will emerge from ANIs just by throwing computing power at them; you might need different algorithms. As a counter-example, OpenAI's GPT-3 has demonstrated growth that did not plateau, gaining "intelligence" by adding more "parameters", which takes more computing power. So adding more parameters will likely continue this trend; we don't know if indefinitely or not, or if it will lead to AGI, but it's not impossible.
And I never said we can't solve this problem, but it's hard, and we better start soon.
I would continue to educate you, but the characters here are insufficient, so you must do it yourself; that's not an "appeal to authority".
"Even a monkey can write neural networks"
nonsense.
"knowing how to implement a NN (which is narrow AI/ANI) doesn't mean you know about AGI,"
Yes, it does, as an ANN is actual technology and AGI is an acronym, a language element to communicate an idea currently without an implementation. GPT-3 is a NN. And it is not going to "become AGI by adding more parameters", because it is not 'general' but very domain-specific.
"And I never said we can't solve this problem"
Yes, you did: "You will find that any solution that you might propose is very superficial, has been debunked and deemed completely ineffective or insufficient."
A statement that also reveals your zealot approach to people who have the audacity to question your proclamations.
"I would continue to educate you"
As you know you are bluffing and you know i know you are, this statement is not directed at me.
How do you know a language model couldn't be general? Do you have any evidence? If it was able to reason about anything, wouldn't you call that an AGI?
No, that doesn't mean what you understood it to mean. I said you might propose. Meaning that you are proposing things that have already been proposed, and are not aware of them, so you will continue doing that, since you refuse to educate yourself.
Feel free to question everything, but at least provide evidence to back up your claims, and don't spread misinformation if you clearly don't know anything about the subject.
Feel free to ask me anything, as ignorant as your questions might be, and as limited as these comments are... I will point you to sources to read, or watch, but I know you won't use them, so...
@Mario
"How do you know a language model couldn't be general?"
A? You mean 'this'. I was responding to something particular that YOU brought up. AGI systems do not exist at this time.
" I said you might propose. Meaning that you are proposing things that have already been proposed"
Might is future tense, proposing is present tense. Word weaseling instead of admitting.
"but at least provide evidence to back up your claims"
You are claiming very extraordinary things, and you have NO proof any of it will ever come to pass. You speak about AGI as if it exists. You need to supply backing proof, and you have none.
You failed to admit to mistakes or apologize for your appalling behavior. You refused rational debate from the first reaction to me. You deny writing what is visible as evidence centimetres above it, and you engage in ever more hysterical claims the more you get cornered in your lack of in-depth knowledge on the matter.
Yes, I am done with you.
Not sure whether I'd call the human brain the most complex structure in the universe, and pretty sure it doesn't really matter, but other than that, I have to agree.
The number of pseudo-scientific "AI research" academic chairs with little relation to engineering created within the last few years is astonishing. If someone wants money for research today, they've got to put something AI-related in the project title.
"Artificial intelligence" today is almost 100% basic machine learning methods applied to a good model. It's awesome what can be done with it, and, like any software, if used in critical infrastructure, you have to make extra sure that there are no bugs. But as for the notion that any of the methods we use today might be the base for some "I, Robot" stuff: I'm sorry, but unless someone finally describes on a technical level how exactly that would play out, I call bull.
The complexity of the human brain matters because it is a prerequisite for the supposed threat. At 80 billion neurons, each having 10,000 connections to other neurons, the human brain is the most complex structure in the known universe. And as that complexity delivers human intelligence (which we do not fully understand), yes, it matters in a discussion about AI possibly surpassing human intelligence.
So whilst a neural network may be able to recognize a face, it is quite another matter for it to discover it has to shoot everybody who threatens to turn off the 3 power stations 50 miles away. For that it would have to 'understand' that electricity to the chips is existential and not a constant. I wonder how it would learn that without being shut down. And how would it learn that when it is shut down?
If given a few minutes of critical thought, the problem is quickly discovered to be good stuff for whiskey and campfires. And bestsellers.
You clearly have not read any of the related literature, such as Nick Bostrom, or even thought about it critically, like you encourage others to do.
If you did, you would know that all your "counter arguments" have been addressed several times in this field, and do not hold up.
Again, if you want to have a serious discussion, research the stuff, it's pointless to repeat it here in these short comments. If you don't want to educate yourself, at least stop spreading misinformation, pretending you know stuff, I do not appreciate such behavior.
Gotta disagree. In my view the issue is not the computational complexity of the brain that can't be reached (aren't we there already?), but rather that it is too hard (and pretty much a waste of effort and power) to formulate a vast and unspecific problem like "be like a human" and to create a model and feedback loop that show sensible results.
Not sure why you are so fixated on that "the human brain is the most complex structure in the known universe" thing; it's obviously wrong. For one, everything that encompasses the human brain is more complex, no?
@Mario
"You clearly have not read any of the related literature, such as Nick Bostrom, or even thought about it critically, like you encourage others to do."
Nick Bostrom is one philosopher who wrote a popular book; doomsday stories sell. Nick Bostrom is NOT an AI specialist. You also fail to mention that Bostrom does NOT represent the field of AI research. Quite a few AI researchers estimate the probability of superintelligence beating humans as 'never', whilst most take the safer bet of saying 'not in the foreseeable future'.
"at least stop spreading misinformation, pretending you know stuff, I do not appreciate such behavior."
Yes, you are spreading misinformation, and yes, you are pretending to know stuff, and as you get ever more cornered in your own quackery, you start to get fairly unpleasant.
Stop spreading misinformation.
@hans_sarpei
"in my view the issue is not the computational complexity of the brain that can't be reached (aren't we there already?)"
No, we are not. Look at the numbers again: 80 billion neurons with 7,000–10,000 connections each.
Moreover, the artificial neurons as they are modeled by the mathematics of AI are simplifications. Human neurons are suspected to be much more complex, so even if we were able to match the number of neurons and synapses of the human brain, we probably would not have matched the complexity of the human brain.
"the human brain is the most complex structure in the known universe" thing, it's obviously wrong. for one, everything that encompasses the human brain is more complex, no?
A human in a box is slightly more complex than a human, yes. But the box is not more complex than the human.
Bostrom isn't an AI researcher, but he has still answered many of your arguments; given how few characters we can use, it is impossible to answer everything here (if you prefer, try Russell instead).
As the survey I linked to shows, there's no consensus, but the median is around 50 % chance before the 2060s (and even 1 % chance of an existential risk is still huge enough to think about it).
Why should AGI have the same "complexity" as the human brain to achieve the same results? We achieve smarter-than-elephant intelligence with 3 times fewer neurons, according to: https://bit.ly/3vcgwA4. Humans are not optimized for intelligence, even less for general intelligence, and AGI doesn't have to mimic human neurons. For example (I repeat, I don't say NNs would be sufficient; I don't know), today's artificial neurons are much faster than biological ones. Human brains are simply the stupid gradient-descent-like solution of a fitness-maximizer (aka evolution); engineers can surely do better.
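For readers unfamiliar with the term, "gradient descent" is the basic optimization loop the comparison leans on: repeatedly step against the gradient of a loss until you reach a minimum. A minimal sketch (the function and learning rate are arbitrary illustrative choices):

```python
# Minimize f(x) = (x - 3)^2 by repeatedly stepping against its derivative.

def grad(x):
    return 2 * (x - 3)  # derivative of (x - 3)^2

x = 0.0
for _ in range(200):
    x -= 0.1 * grad(x)  # step downhill with learning rate 0.1

print(round(x, 6))  # converges to the minimum at x = 3, prints 3.0
```

Evolution is "gradient-descent-like" only loosely: it blindly follows whatever improves fitness, with no foresight, which is the sense in which the comment calls the result "stupid".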
@Jan-Marten Spit
Disregarding the discussion of whether those numbers matter, 8*10^14 seems achievable to me.
As I have said, I would put the real problem somewhere else: stuff would have to be modelled, and that's just too hard to get sufficiently precise. I guess we agree on that, but in my view this is the main thing.
I don't even think it is 100% impossible in theory, but it would cost incredible amounts of time, effort and money. And then you'd maybe have something that might resemble human intelligence and wastes a lot of energy, but what the heck do you want with it? You don't want another human; you want to make useful stuff that takes reasonable effort to make and ideally improves things a bit. If you do want another human, that's easier to make, too.
I don't see this on its way, I agree with Jan-Marten Spit on the main point, and I think one should worry about a few of the millions of more likely and disastrous risks first.
And how the heck are we at maximum comment depth already?
@marie_saulnier
"Bostrom isn't an AI researcher, but he has still answered many of your arguments"
The matter at hand is whether the extraordinary claims are properly argued. I do not have to defend a lack of belief.
"Why should AGI have the same "complexity" as the human brain to achieve the same results? We achieve smarter-than-elephant intelligence with 3 times fewer neurons"
Because perceptrons are simplifications of human neurons, and the 'intellect' of AI is a function of the number of synapses; there is a minimum size for a NN to learn an XOR gate, for example. The complexity matters.
Although an elephant has 3x the neurons, most of them are in the cerebellum, whilst the human cerebral cortex is half the weight but contains three times the neurons of the elephant's.
"Humans are not optimized for intelligence"
I would not know what that "means". Humans created AI.
"even less for general intelligence, and AGI doesn't"
exist at this time.
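The XOR remark above refers to a classic result: XOR is not linearly separable, so no single perceptron can compute it, but a hidden layer of two units is enough. A minimal sketch with hand-chosen (not learned) weights:

```python
def step(z):
    # Heaviside threshold activation
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # fires if at least one input is 1
    h_and = step(x1 + x2 - 1.5)      # fires only if both inputs are 1
    return step(h_or - h_and - 0.5)  # OR but not AND, i.e. XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # outputs 0, 1, 1, 0
```

No single unit `step(w1*x1 + w2*x2 - b)` can produce the 0, 1, 1, 0 pattern for any choice of weights, which is the "minimum size" point.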
@marie_saulnier
"today's artificial neurons are much faster than biological ones."
Today's artificial neurons are -simplifications-. There is a lot about neurons and the human brain that we do not understand. So how can you claim biological neurons are inferior if you do not know what to measure, let alone have actual measurements?
"Human brains are simply the stupid gradient-descent-like solution of a fitness-maximizer (aka evolution); engineers can surely do better."
I am just baffled hearing an argument 'for' the potency of AI by denouncing one of the most important ideas and techniques behind machine learning as 'stupid'.
Moreover, did you not smell the paradox in claiming that 'human neurons are inferior, engineers can do better'? What would those engineers be using when they are 'doing better'?
@hans_sarpei
"disregarding the discussion of whether those numbers matter, 8*10^14 seems achievable to me"
If a synapse required 10 bytes to track its weight, that is 8 petabytes. And 8*10^14 connections to calculate through, per iteration. I would not dare say that is impossible in the future.
It seems we agree there are more pressing, and actual problems with AI.
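The storage back-of-the-envelope in this thread is easy to check (the 10-bytes-per-synapse figure is the comment's assumption, not a measurement):

```python
# Back-of-the-envelope check of the synapse storage estimate.
neurons = 80e9                # ~80 billion neurons
synapses_per_neuron = 10_000  # upper estimate used in the thread
bytes_per_synapse = 10        # assumed, not measured

synapses = neurons * synapses_per_neuron
total_bytes = synapses * bytes_per_synapse

print(f"{synapses:.0e} synapses")             # 8e+14
print(f"{total_bytes / 1e15:.0f} petabytes")  # 8 petabytes
```

That is the weights alone; compute per iteration across all those connections is a separate, larger question.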
I'm happy to see that there is already a proposal about AGI, and AI safety specifically.
I think it's the most under-discussed and potentially most important problem that we will ever face, even above wars, climate change, poverty, etc...
It really can't be overstated how important this is, and the fact that most people don't know about it makes me deeply worried for the future of humanity.
The AI alignment problem, or control problem, needs to be solved as soon as possible, because several research groups are working on AGI right now, and good progress is being made, so we don't know when AGI will emerge, and if it does before we solve the alignment problem, it will be too late.
The potential of AGI could be immense, either in a positive or a negative way; it will probably be the last invention of humanity. Solving the alignment problem will make a positive scenario more likely after AGI emerges.
This comment has limited characters, so exploring a complex topic like this is hard.
Conversations
You are talking about narrow AI, and not AGI. While I do think that narrow AI is a very real and worthwhile problem to talk about, I think this post is mainly focused on general AI, as the OP says "The problem is : we don't know how to "carefully design" an AI that is more intelligent than us", and quotes Bostrom's "Superintelligence" in the sources.
Indeed, this idea is mainly about AGI (I should have been more precise in, like, the title? In retrospect, I could probably have picked a better one), though I also think that today's AIs should be discussed.
Why not post another idea about it (and post the link in the comments here afterward)? I could do it, but I don't know anything about AIs involved in nuclear "second strike", and I'm worried that, were I to write it, I would forget the obvious. So let's share what we think are the most important problems with today's AIs and …
Now, I notice how inadequate this platform is for constructing ideas together … well, I will still mention that recommendation algorithms (capable of influencing a large part of the public opinion in many ways, and not aiming to be beneficial to society) should probably be mentioned in this new idea.
A real problem with AI, right now, is that it enhances the power of some over many. Humanity is currently providing huge datasets to learn from to a handful of over-sized conglomerates, mostly operating in different countries so outside democratic control.
This problem already exists and will only get worse with advances in AI. No 'super-intelligence' required; just a bit of intelligence suffices to do evil.
So why are we focusing on a hypothetical problem not considered realistic by many in the field, whilst not addressing a problem that is actually occurring now, and considered a problem by almost everyone in the field?
I agree that AI development should be planned out and regulated. Progress will not stop, but we can and must direct it.
For starters, don't try to make humans. Computers and machines do a lot of things a lot better than humans, but fail in other areas. Build on the strengths of everything that already is. Let computers compute, and let humans make art or whatever each individual is most talented/capable at.
If you somehow still do end up making humans, have a specified set of tests (a Turing test is just not enough anymore) and values that qualify something as human, and give the thing human rights.
I am referring to Marie's substantive post of 19 April.
It is all very well (and laudable) for human beings to cooperate for the greater good (JS Mill), but the desire or
inclination, on the evidence, for doing so isn't compelling. The map of Europe has changed significantly during each century since the Treaty of Westphalia (and indeed prior to that treaty). Even the EU possesses internal tensions of nationalist origin, and given the possibility of a very great strategic advantage (military, trade or whatever) arising from AI, cooperation will be at best slight.
Normative appeals to our better natures may possess some influence, but only up to a point; ultimately nationalism, as the Seven Years War and both World Wars attest, triumphs. The same will prove true for cyber warfare and AI, with the two intertwined. As it stands, about 1.8 billion people have control over circa 6 billion (7.8 - 1.8) people now, and I, for one, foresee no material changes long term.
What about a single employment contract for Digital Work in Europe?
A single employment contract for Digital Work in Europe