Thursday, June 29, 2023

Alicization

After playing with state-of-the-art AI models (e.g. GPT-4), and having somewhat understood how such models are trained, I have come to the conclusion that it is impossible to create super-human intelligence from existing human data.

As a side note, it is probably possible to make an AI whose abilities almost reach the level of world-class experts, but preparing such training data would be very difficult. The process would probably be the equivalent of employing a world-class expert to train the model as if rearing a child. You need to "teach" it things, preferably interactively, with thoughtful feedback (RLHF). It would be labor-intensive, and there's still no guarantee the process would be good enough. At some point one would wonder whether raising a real child instead of an AI child is a better use of resources.

But no, the problem with super-human intelligence is not only the quality of the training we give it, but that we don't really have a way to define the objective function. Unlike games like chess or Go, where there is a well-defined ordering of ability, in "general intelligence" we have no idea what to look for, which is why the best we can do today is to ask the language model to do "something like this" -- and feed it the best data humanity can offer. But that would at best result in human-level intelligence, not superhuman.

Of course, we can always take the simulated-game approach -- for example, if we want the best politician, we can create elaborate simulations of the world and use the same self-play strategies as in chess or Go to make an AI better than what existing human data allows. But creating such a simulated world would take an unimaginably huge amount of resources. And even worse, we often don't really have an objective "intelligence" function to optimize -- we don't really know what intelligence is.


If we can't tell, for example, whether Shakespeare is more intelligent than Bach, how do we make an AI better than both Shakespeare and Bach? If somebody managed to make one, can we really tell it is better?

On the path to so-called "Artificial General Intelligence" (AGI), we've solved the problem of computation through Moore's law, and we've solved the problem of gathering data through the internet. We've proved that once those problems are solved, we can create human-like AIs. But to create super-human AIs, we need to *recognize* them as better, and we have no way to do so. In fact, we *know* for a fact that we haven't even fully solved the problem of recognizing "human intelligence". The software industry is still debating how to evaluate a person's programming ability. Schools are still struggling to define objective criteria for assessing academic performance. Smart people aren't recognized by fools as smart. The bestest AI might sound like a madman to most people. Even if we magically created a "superhuman" AI, it's a virtual certainty that many people wouldn't recognize it as such -- in which case you'd ask, is it really superhuman if nobody thinks it is?

We can't imagine inventing cars; we're all actually just asking for "faster horses". AI will definitely get faster, and maybe that is "superhuman" in some sense. But if you wanted mind-blowing insights into deep philosophical questions? First, you'd have to be smart enough to recognize the best answer.

Intelligence is not an "NP" problem: there's no general "easy" way to give a text an intelligence score or rank. The Turing completeness of natural language makes validation, in general, as hard as generation. Intuitively it feels to me that the "human life" subsets of language generation and validation are no different.

Wednesday, June 28, 2023

Life is not a story

There is probably a significant difference between living one's life and living one's life as a story.

What makes for great experiences in real life doesn't necessarily make a good story.

Yet humans are suckers for stories. We crave them, and we do everything in our power to get more stories, and to share more stories.

The urge is so strong that some people live their own life as a story.

(The fact that Instagram has a feature of the same name is no coincidence)

What makes for great experiences in real life doesn't necessarily make a good story.

A good story has a plot, an overarching purpose, a theme, a narrative. It has drama. It has action. It has presuppositions about who the protagonists are, and everybody is a player in a role.

But life is messy. Life doesn't care about narratives. Life doesn't have an overarching, consistent plot. Life is about experiences. Life is about personal expression. There is a freedom in life that doesn't come with a rigid storytelling framework.

So when people live their life like a story, personal needs give way to the needs of storytelling, and priorities become misaligned. Happiness obviously fades. But perhaps more importantly, in a subjective framework, personal misalignment is cosmic misalignment.

This is my hypothesis about the difference between greed, fantasy, and asking for what one needs. What we need is what we need, not what an imagined story with me as the protagonist needs. That's why winning the lottery is (usually) "greed" -- it makes for a great "story" (esp. in the Instagram sense), but it's often not what a person really needs. (For starters, money is most often a means to an end, not the goal itself.)


Tuesday, June 27, 2023

Latency

 > Usually there's a slight delay when you use conscious intent to steer a dream.

My recent hypothesis is that the "real world" is a dream whose delay is measured in months and years.

Tuesday, June 13, 2023

9 guesses for AI

Let's see whether these pan out:

- Transformer-based large language models will reach human-level performance as a commodity in ~2 years. However, they will hit various roadblocks on the way to super-human reasoning. (Philosophically, because you can't just feed a neural network data and expect it to be more intelligent than that data -- and even if you accidentally create a smarter AI, you just won't recognize it as smarter.)
- Armed with the knowledge that, given enough data, anything is learnable by machines using simple architectures, smaller models in various niches will become the new thing. They won't be in the headlines, but they will contribute to technological progress.
- Regulatory frameworks for larger, general-purpose models will be crazy. It will be like GDPR -- bigger players will just pretend to comply and ignore the actual law when compliance is inconvenient for them.

Subjective Truth implies Free Will

"When you know the future perfectly, it's already passed." - Alan Watts

A corollary: Once you know what decision you will make, you have already made the decision.

Subjective Truth implies that if you don't know something, its state is undetermined, and "not knowing which universe you are in" is equivalent to the unknown states being undetermined in reality.

So in this context, making a decision is an act of discovering which universe you are in. For any unknown fact about your self, it is always possible to discover it eventually. As such, subjective truth alone implies free will.

And objective truth probably implies determinism, unless you do a bunch of complex mental acrobatics. But still, I *think* free will is compatible with determinism as long as you have a good definition of "self".


The more interesting question is: why shouldn't this argument extend beyond the self to everything, to all that exists?

Now this is a deep one.


(Note on 2023-07-07: the argument seems wrong, but it's still valuable as salvageable material.)

Monday, June 12, 2023

Karma (因果)

阿呆 brought it up, which reminded me of the only time 朱老總 ever invited me to write an article: I saw the topic was "karma" and immediately declined.

Too hard to write. Even if I held nothing back and used all the hard-to-digest concepts and jargon (e.g. "subjective truth"), I might not manage it -- let alone something aimed at a general audience.

Only when 阿呆 mentioned《大隻佬》(Running on Karma) did I learn that the film covered such deep material. See: even with such an accessible presentation, nobody fucking understood it, and those who did refused to accept it. (Modern technology being what it is, I went straight to YouTube to watch an expert explain the plot, yay.)

Still, even if there's nothing here the general public can consume, I might as well write down what I've been thinking about these past dozen-plus years. (I remember starting to think about this around 2009; I wrote about it on a private blog, but where did that data go?)


  • "Karma" (因果) really does refer to the most ordinary cause and effect that modern science uses all the time.
  • Because modern science is used to immediate, observable effects, the only cause and effect it acknowledges is the simplest, most materialist kind, i.e. the kind taught in physics class.
  • The karma of traditional wisdom covers more: mainly things that are *statistically* verifiable but very hard to reproduce in a lab. Emotions, for example, easily form such feedback loops: emotions trigger behavior, and behavior triggers the same kind of emotions. The more stabilized of these "causal chains" we call "culture" or "heritage"; they're roughly the same thing. All of this can be explained from a materialist, objective standpoint, though I feel the effects are usually a bit stronger than plain materialist theory allows -- you'd need to bring in something like the collective unconscious to fully account for them. But that doesn't much affect the conclusion.
  • Although modern science doesn't like to talk about these things, and they're hard to quantify precisely, anyone with some life experience knows there are faint causal connections, ripple effects, and that the character of a cause is often preserved in its effect. You understand once you've experienced them and paid attention, but they're hard to explain in the abstract. Put crudely, they become "good" and "evil". (Perhaps also because a "good cause" rarely turns into an "evil effect", and vice versa.)
  • Since the general public is mostly idiots, such a scientific theory is hard to convey intact, so in the end it degenerated into the most vulgar "do bad things and retribution will come".
  • "Do bad things and retribution will come" is actually more or less true, except that believers usually misjudge what counts as a "bad thing", and often can't even see the "retribution" when it has plainly arrived.
  • "Retribution" is basically a subjective truth. Heaven's law is not human law; it need not be seen (by on9 humans) to be done. The Way of Heaven itself doesn't care about worldly good and evil (note: if a human-shaped god lectures you, what he says is at most half the Way of Heaven -- having human traits means he too is shaped by the human way). Good and evil are themselves subjective truths invented by humans, so the operating principle of "retribution" (if any) is just subjective attraction and manifestation: take a shit and it stinks, while those who can't see the Way of Heaven think that stepping two paces away and covering their nose means they won't smell it.
  • Also, from a certain angle, the act of doing the bad thing is already the retribution; in shit terms, "while you're taking the shit, you're already enduring the stink". (Most of the on9 public is *incapable* of doing truly bad things, so they don't know the act itself already delivers quite a lot of retribution. As for people who enjoy doing so-called "bad things", there's little malice in their hearts, so they may not suffer much at all.)
  • Another aside: wishing retribution on someone else (whether or not they "deserve" it) is itself definitely a malicious thought. I don't understand how people can be so on9 as to not see even this.
  • And the shit you take isn't endured by you alone; everyone smells it. Hence "the Japanese soldier killed people, so 李鳳儀 has to die". (In shit terms: an asshole takes a dump, and the whole toilet stinks.)
  • Sometimes the shit gets flushed far away and the person who took it can't smell it, creating the illusion that "karma doesn't work". But this world is one whole, just as nobody complains "the arse took the shit, yet the nose has to smell it -- how unfair". Modern society is so individualistic that people genuinely think everything beyond their skin is another world, but science tells us physics has no such boundary. When a person does something bad and the community is harmed, that too is a form of "retribution", the same in nature as "the arse took the shit, and the nose has to smell it".
  • As for the karma of past and future lives, it's utter nonsense -- unless you say everything in the world is my past, present and next life. Randomly pulling some bad deed out of a database and claiming it's on me is no different from a frame-up. A frame-up differs from having to eat it: having to eat it is when something isn't directly your doing but you still have to clean up the shit; a frame-up is being told the shit is yours, so kindly clean it up.
  • Incidentally, I genuinely think I may not have to answer for the so-called me of one second ago. This question has bothered me for a long time. Legally (in Hong Kong and many other places) criminal liability has no time limit: break the law fifty years ago, get caught fifty years later with sufficient evidence, and you still face the penalty. But those two people have no relation beyond a seemingly continuous historical lineage -- is the crime stored in memory? (Yet I can't quite say that if you broke the law a second ago, you shouldn't be punished a second later...)
  • Even this-life karma gets a bit strange when stretched over long spans of time; if past-life karma isn't a frame-up, what else could it be?
That's the old-fashioned stuff above. Below are some more experimental thoughts.
  • Cause and effect may not exist at all, because first you'd have to sincerely believe that time exists.
  • Even if you believe time exists, you don't necessarily need any causal relations.
  • The objective materialist world simply exists. If you believe in time, then the world is just this: at each point in time, certain things exist in a certain configuration. Who says the things at different time points are related?
  • We see things as having continuity and relations as time unfolds purely because we want the world's phenomena to make sense, so we take whatever patterns we can induce and call them cause and effect.
  • From another angle, if I were an omniscient god, I wouldn't need causality at all. Want to know what happens at some time point? Just fetch that time point's record from the database. It's because we humans lack omniscience that we need "causality" to infer the universe's regularities, to help us understand the world and improve our odds of survival.
  • In the objective materialist world, the omniscient can only say: the cause at the previous second is the state of the entire universe, and the effect at the next second is also the state of the entire universe. That is the full picture.
  • So "causality" is this: under insufficient information, we take the data we consider most important, induce some recurring phenomena, and roughly estimate what will happen next.
  • This process of inducing causal laws is crude and subjective, so the same phenomenon can be induced into all sorts of different causes and effects. (Maybe that's why, once you get precise down to the quantum scale, you're forced to talk in probabilities...? Disclaimer: if I thought seriously about this for a few more days I might have an answer.)
  • Causal claims outside physics formulas(?) are usually about "storyness": they usually reaffirm some "culturally accepted causal relation", and the tightness of the causal link isn't necessarily the point. For example, "study hard and you'll score high on the exam" is, let's say, a plausibly valid causal relation, but the reason mothers keep repeating it isn't that the link is tight; it's that it's culturally endorsed. In fact, "be smart and you'll score high" may be a tighter causal relation, but fewer people say it. More extreme still, "luckily guess all the answers and you'll score high" is unassailable, yet people hear it as empty talk. (Is it because "effort" is more actionable than "luckily guessing the answers"? I'm not sure. There are quite a few ways to improve your luck too; our culture just doesn't like talking about them... :0) )
  • I should spend another couple of sentences explaining what "storyness" means. In English it's the moral of the story, like how the Three Little Pigs teaches you, eh... whatever it teaches. The point isn't how statistically tight the causal relation is, but how many copies you'd sell if you wrote a fairy tale around it. The ones that sell well have "storyness".
  • (Aside: are physics formulas themselves causal relations that humans only induced because of some kind of "storyness"? This question could easily be a CLS rabbit hole...)
  • The most on9 thing of all is staring at a result and trying to find an objective "cause" for it.
  • Finding a "cause" isn't the problem, but since causality is subjective, it depends on what you're finding the "cause" for. Take "the toilet stinks": if your goal is to explain it to a one-year-old, you'll say "because daddy took a shit"; if you want the flat to smell better afterwards, you'll say "because the exhaust fan wasn't on" or "because we didn't use aromatherapy oil -- let's buy some"; if you want to shame your family or pick a fight, you might say "it's because you wouldn't finish your shit at the office and just had to come home to take it!"
  • As the saying goes, correlation is not causation, but there's actually a big problem: there is basically no 100% accurate way to tell the two apart. (Warning: this is a philosophical rabbit hole; below is just a small taste of the stinking shit.)
  • In the most common example, "ice cream and crime", we "know" the relation is correlation and not causation purely because we've decided that "both share a common cause, i.e. temperature". That explanation is full of "storyness", but how do we actually know that ice cream *really* isn't a cause of rising crime? The textbook says "ban ice cream, then watch whether the crime rate changes". That too is an explanation full of "storyness", because even if crime doesn't change, how do you know the act of "banning ice cream" didn't itself push crime up, offsetting the crime-inducing element of "less ice cream"? (I can spin stories too: wow, ice cream is banned, nothing good to eat, so much stress! Can't take it -- better go out and scam a couple of grannies to restore my mental balance!)
  • The problem with causality is still that the universe is one seamless whole; ultimately there is no "objective" way to carve it up and declare that part A caused part B. All you can do is *subjectively believe, within some cultural context, in some causal relation that has storyness*.
  • So if the point above is true, then in theory someone (or any object) from a completely different cultural background could use causal laws we utterly fail to understand to predict things we'd never guess -- things of the form "because the Japanese soldier killed people, 李鳳儀 has to die". (Btw, writing this I suddenly spotted a joke: since no one escapes death anyway, "because the crow said a swear word, Hitler must die" would also be correct, guaranteed to come true.)
  • I wonder whether other(?) prophets' prophecies work like this. Of course, I'd guess they merely feel they're stating something as plain as 1+1=2, with no idea how strange what they say sounds. (Indeed, tell a tribe that never invented mathematics that 1+1=2 and they genuinely might not know what the hell you're on about; performing mathematical magic for them might really feel as miraculous as true magic -_- )
  • Someone might say: no need to make it so mystical -- just state the causal law, run experiments, see whether it comes true, and you'll know whether it's bullshit. But the problem is, from the omniscient's perspective, both the cause and the effect are the state of the entire universe. The omniscient knows the effect, but may not know why it knows, still less how to teach you to see it. Forced to put it into words, it can only produce things of the form "because the Japanese soldier killed people, 李鳳儀 has to die". It's somewhat like a genius at exams: ask him how he did so well, and all he can do is smile and say "study hard and you'll do well" -- surely you don't want to hear "because the Japanese soldier killed people, I scored well"?
  • A good 2022 example: why does a certain large neural network produce a certain output? As the omniscient of the computer system you have the state of every neuron inside it, but even memorizing all those values is useless; you don't understand them yourself, so how do you explain? All you can do is bullshit something: oh, maybe your prompt wasn't written well enough. (dllm, I suddenly understand why, when people asked the Buddha why they tripped in the street, the answer was that they were born ugly because of a past life...)
  • Incidentally, you have no way to repeatedly verify non-generalizable causal relations like "because the Japanese soldier killed people, so-and-so did well on the exam". Does that make such causal relations "wrong"? All I can say is that they're completely wrong culturally and in "storyness" terms, but not necessarily contrary to the Way of Heaven.
  • Speaking of generalization, it too is a thing with a lot of storyness. Put the two facts "the Japanese soldier killed people" and "李鳳儀 has to die" side by side, and everyone already feels the urge to attach a causal relation. Then we start thinking about how to generalize, e.g. "because {A} did {action} to {B}, {C} must have {action} done to them". Come on, that's something your own brain confabulated! Even if you find tens of thousands of examples matching the generalization, that doesn't make the generalization right! The universe isn't merely one seamless whole; every single thing in it is unique, so how can anything casually stand in for anything else? Even if "because the Japanese soldier killed people, 李鳳儀 came first in the exam" were true, it wouldn't mean you'd come first if you sat the exam, because you simply aren't 李鳳儀. Every generalization in this world is, in itself, groundless, so every axiom that does generalize is actually a miracle. That this world contains regularities we manage to find -- is that the world's miracle, or proof that we have miraculous powers? (Wigner, perhaps?)
  • So(?!) -- yes, really, therefore -- "causality" can only be a matter of "faith". We already(?!) know that if you "believe" hard enough, the result will (with high probability?) come true, while finding the "cause" afterwards is subjective interpretation. The whole thing really is held up by the single word "faith".
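The ice-cream-and-crime point above can be sketched numerically. Here is a minimal simulation (all numbers made up, a toy linear model, not a claim about real crime data) in which temperature drives both variables: the raw correlation looks causal, but it vanishes once the assumed common cause is regressed out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical model: temperature drives both variables independently.
temp = rng.normal(25, 5, n)                  # daily temperature
ice_cream = 2 * temp + rng.normal(0, 5, n)   # sales depend on temp + noise
crime = 3 * temp + rng.normal(0, 5, n)       # crime also depends on temp + noise

def corr(a, b):
    """Pearson correlation coefficient."""
    return np.corrcoef(a, b)[0, 1]

def residual(y, x):
    """What's left of y after regressing out x (simple least squares)."""
    slope = np.cov(x, y)[0, 1] / np.var(x)
    return y - slope * (x - x.mean()) - y.mean()

# Naively, ice cream "causes" crime: the correlation is strong.
naive = corr(ice_cream, crime)

# Controlling for the confounder, the correlation collapses to ~0.
adjusted = corr(residual(ice_cream, temp), residual(crime, temp))

print(f"naive correlation:  {naive:.2f}")
print(f"temp-adjusted corr: {adjusted:.2f}")
```

Of course, per the bullets above, this only works because we *chose* temperature as the "common cause" to control for; the simulation demonstrates the arithmetic of confounding, not a way to discover which story is true.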

If I were to write a proper article about the pile above, I think the only way would be to write it once in English, feed it into GPT-4 and have it do the writing, then translate it back into Chinese...

Italian Vinegar Soy Sauce (意大利醋豉油)

(A long-overdue recipe post)

By pure accident I mixed up something rather interesting.

- Balsamic vinegar (must be the genuine Italian-made Balsamic Vinegar from City Super, the hundred-something-dollar bottle)
- First-press (頭抽) soy sauce
- Some salt
- Some sugar
- I think I also added a little five-spice powder

The vinegar-to-soy ratio was maybe 1:1, or slightly under 1:1? I've forgotten. Be generous with the salt -- don't skimp; you need enough of it to counter the vinegar's sourness. Sugar is to taste.

The mix is first-rate for seasoning stir-fried vegetables: it can replace soy sauce and gives dishes a touch of novelty. The vinegar's complex flavors pair remarkably well with the soy sauce's "umami".


------

Background: out of mild laziness, I had recently pre-mixed a small bottle of ready-made "soy-sauce-style seasoning", with a base of soy sauce and sugar. The idea was that since I need soy sauce and sugar every time anyway, mixing them ahead is more convenient -- I don't have to fumble together a small batch after the stove is already on.

I happened to buy a bottle of Balsamic Vinegar at City Super, so I tried using it for seasoning as well. To my surprise, one taste and wow -- truly "all five flavors at once" (in a good way), so I switched straight to the formula above in place of my usual "soy sauce plus sugar". So far it has proved widely applicable; I haven't had a dish where it felt out of place.

Sunday, June 4, 2023

李泌

https://ctext.org/taiping-guangji/38/zh

李泌 of the Tang dynasty: primary occupation, cultivating immortality; side job, chancellor...

Later histories castrated away his supernatural exploits. The Zizhi Tongjian says: "Though they can hardly all be believed, how could they all be disbelieved! Here I select the credible parts and preserve them." (「雖難盡信,亦豈得盡不信!今擇其可信者存之。」)

Which Tang emperor never launched a coup?

In history class, teachers usually taught that the Tang dynasty was a golden age. But recently I watched HereIsAleph on YouTube compile a pile of Tang political incidents, and it made me realize that while the Tang imperial house behaved slightly better than the Northern Dynasties at their most insane, it really just carried on the absurdity of the Northern and Southern Dynasties' courts.

Up until the mid-Tang, every generation's emperor or crown prince went through a set of horrific palace coups that violated the most basic human bonds.

Under Gaozu 李淵: the Xuanwu Gate Incident is the most famous -- 李世民 killed his two brothers and forced his father to abdicate. This set a rotten precedent, and every later generation copied it.

Under Taizong 李世民: the original crown prince (Taizong's eldest son 李承乾) plotted rebellion with his brother 李祐, intending to assassinate his full younger brother, and secretly schemed to kill Emperor Taizong himself; when it came to light he was stripped of his position as crown prince.

Gaozong 李治 / 武則天: Gaozong was the first Tang emperor who never launched a coup. (The next should be Daizong. :P) But he and Empress Wu supplied plenty of gossip. Empress Wu was originally a consort of Taizong; 李治 met 武則天 while attending Taizong as crown prince, so taking her as his own consort already counted as incest. He then went for the mother-daughter set with 武則天's elder sister, the Lady of Han (韓國夫人), and her daughter. Later both mother and daughter died within a short span, and the world surmised it was the Empress's doing.

After Gaozong, Wikipedia gives the order of succession as:

Zhongzong (Empress Wu as regent) → Ruizong (Empress Wu as regent) → Empress Wu (as Sage and Divine Emperor of Wu Zhou) → Zhongzong (restored) → Emperor Shang → Ruizong (restored)

With these events in between:

  • The Shenlong Coup (神龍革命) https://zh.wikipedia.org/zh-hant/%E7%A5%9E%E9%BE%99%E9%9D%A9%E5%91%BD
  • Empress Wei's usurpation (韋后之亂) https://zh.wikipedia.org/zh-hant/%E9%9F%A6%E5%90%8E%E4%B9%8B%E4%B9%B1
  • Crown Prince Chongjun's coup (重俊之變) https://zh.wikipedia.org/zh-hant/%E9%87%8D%E4%BF%8A%E4%B9%8B%E8%AE%8A
  • The Tanglong Coup (唐隆之變) https://zh.wikipedia.org/zh-hant/%E5%94%90%E9%9A%86%E4%B9%8B%E8%AE%8A
  • The Xiantian Coup (先天之變) https://zh.wikipedia.org/zh-hant/%E5%85%88%E5%A4%A9%E4%B9%8B%E8%AE%8A
That many coups in a mere twenty years -- and that's not even counting Empress Wu usurping her two sons' thrones to make herself emperor.

Under Xuanzong 李隆基: Xuanzong himself launched two coups, the Tanglong and Xiantian coups (see above). But he is of course most famous for doting on Consort Yang (楊貴妃), who was originally his own son's wife; he simply seized her and made her his consort. As for what happened later in the Tianbao era, at worst it shows he had become a garbage emperor; it wasn't exactly a crime against human bonds. This moment is quite pivotal: perhaps once the emperor regained some humanity, the state began to decline instead. (See the discussion below.)

Suzong 李亨: Suzong's case is a bit comical. Though still the script of a crown prince usurping the throne, it ends with the illusion of a happy family reunion. After the An Lushan rebellion broke out, father and son fled in opposite directions: Xuanzong hid away in Sichuan, while Suzong took a skeleton crew to the old northwestern base, allied with the Uyghurs (回紇), and proclaimed himself emperor at Lingzhou (靈州) to launch the counterattack on Chang'an. But think it through and the whole affair is eerie: communication with Xuanzong in Sichuan was dreadful -- as the saying goes, "the road to Shu is harder than climbing to the blue sky" -- and a message from the far northwest took weeks to reach Chengdu, so clearly when Suzong ascended, Xuanzong was merely "informed". Later Suzong recovered the two capitals and welcomed Xuanzong back to Chang'an, and the two put on a heartwarming show: grand happy-ending BBQ. I suspect the histories deleted many undercurrents of father-son conflict; Xuanzong, seeing the situation was beyond saving and contesting it pointless, probably gave his son face and obediently became retired emperor. Reportedly his final years weren't very happy.

Suzong's reign also saw the rebellion case of Prince Yong, 李璘. While the An Lushan rebellion was still raging, Xuanzong had dispatched 李璘 to garrison Jiangling (江陵), but he wouldn't heed the new emperor Suzong's direction, and Suzong's forces quickly killed him. The affair implicated 李白: it turns out "At dawn I leave Baidi amid rainbow clouds; a thousand li to Jiangling, returned in a single day" was written when a general amnesty reached him mid-exile, which is why the poem is so breezy. So who actually manufactured this rebellion case? This link https://www.sohu.com/a/469971569_120983970 is a hilarious read, but probably overblown (the case was later overturned in Daizong's reign -- why, exactly?). In any case: with the An Lushan rebellion not yet quelled, the crown prince unilaterally declared himself emperor in the far northwest while another prince attempted to carve out Jiangnan -- still the same early-Tang script of fathers and sons coercing each other and brothers slaughtering each other. But by then it was near its end.

I know nothing about the mid-to-late Tang. From Wikipedia, the family atrocities decreased: many emperors died young without naming heirs, and eunuch dominance grew. Basically the script switched to palace eunuchs killing the old emperor and installing a new one. Looking back at the Tang's rise and fall, the secret of its early golden age may well have been the "custom" of princes seizing the palace by force. However much it violated human bonds, it at least guaranteed that whoever took the throne had the ambition and strength to be an emperor. The mid-Tang decline wasn't because 李隆基 staged two coups; rather, in his later years the realm was so peaceful that its military capability rusted, to the point that An Lushan could march on the capital and meet no resistance. In the end it still took the crown prince (indeed the crown grandson) to restore the state. After Suzong, the princes stopped playing at coups, power ultimately passed into the eunuchs' hands, and the Tang walked its road to ruin.

Come the Song, the emperors again disliked risk, and the state enjoyed peace and quiet -- so when the Jin sent a single cavalry force sweeping south, the state fell. So, absurdity aside, if the people at the highest organ of the state lack sufficient wolfish ambition and basic military knowledge, they will always be easy to shatter in a single blow.

Friday, June 2, 2023

Random thoughts

People try to mask negative intentions with polite words and the like, but the intentions get through. Some are obvious in hindsight; others are somehow felt with a sixth sense. Perhaps that's why some people put on layer upon layer of polite masks: they think they can hide their intentions, and they try harder when they fail, not realizing it is a fundamentally futile act.

---

I used to think Gödel was a bit of a freak for his paranoia in his later years. These days I sympathize. A side effect of generally heightened awareness seems to be a proportionally heightened awareness of various tail risks, and the realization that they could become reality if you choose them to be. As possibilities flash before your mind, if you lack the courage (or recklessness), you naturally try to eliminate such risks as much as possible. Once that becomes a habit, you grow sensitive to ever less probable risks, and hence become more and more "paranoid".

---

Is the "10,000 hours of intentional practice" thing mostly fueled by the crystallization of one's imagination, the same way the placebo effect works? It might be interesting to see what happens with "10,000 hours of intense imagination but no practice". The funny thing is, "a couple of years" is empirically roughly the median time scale on which "dreams come true" -- or alternatively, on which mental projections become inter-subjective.

---

There is something funny about "attention". Is "attention" related to "choice"? In theory we somehow "choose" what we give "attention" to...

Given what we already know about the purpose of "attention" in Transformer architectures... does it make sense that if we shut down the attention mechanism, we would actually perceive more of everything else?

Are there different kinds of attention?

---

There are a bunch of people who, to me, seem to have become stuck in their own subjective interpretation and reality of the arcane and occult, making "too many" connections that aren't useful. I can't say I don't feel a primal "disgust" when listening to them talk -- but when I rationally analyze the situation, I can't think of a good reason to say they're "wrong"; the only thing I can say is that I don't accept that truth. As long as they don't actually try to implement what they believe, they're fine (and given that they're alive, they probably never tried). The question is: are those incoherent *stories*, the coincidental connections, actually useful (for them)? I can't conclusively say they aren't. It's like a functional person with a side hobby of being delusional. So what if person X believes that Santa Claus is literally a real person who lives in the North Pole and climbs down chimneys? And I guess it's a cautionary tale for us -- a reminder that there are many dimensions of truth, none objectively better, and that there's no inherent reason why others would choose yours over theirs. That being said, of course, don't be that guy.