Thursday, September 28, 2023

perceive

 one needs to perceive the signal without the noise

the way to perceive the signal is to amplify it by positive recursive feedback, but the noise must be lower than the signal for this to work. that is why we reflect on the self. the signal is from the self, because that is all there is.

it may be possible to filter out some noise
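A toy numerical sketch of the feedback claim (the numbers, the tanh loop and the gain are all illustrative assumptions on my part, nothing rigorous): start from signal plus noise, keep feeding the output back through a saturating gain, and count how often the loop locks onto the sign of the signal rather than the noise.

```python
import random
import math

def amplify(x0, gain=2.0, steps=50):
    """Positive recursive feedback: keep feeding the output back as input.
    tanh keeps the value bounded, so the loop saturates near +1 or -1."""
    x = x0
    for _ in range(steps):
        x = math.tanh(gain * x)
    return x

def trial(signal, noise_level):
    noise = random.gauss(0.0, noise_level)
    return amplify(signal + noise)

random.seed(0)
signal = +0.3  # the "true" signal we want to perceive
for noise_level in (0.1, 0.3, 1.0):
    outcomes = [trial(signal, noise_level) for _ in range(1000)]
    correct = sum(1 for y in outcomes if y > 0) / len(outcomes)
    print(f"noise={noise_level:>4}: amplified to the correct sign "
          f"{correct:.0%} of the time")
```

With the noise below the signal, the loop saturates to the right sign almost every time; once the noise dominates, the amplification just locks in whatever the noise happened to be.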


center is all

 perhaps you only need to control the center..?

Saturday, September 23, 2023

Second thoughts on Turing Machines and Busy Beavers

YouTube comments on a standard textbook introduction to the Busy Beaver function https://www.youtube.com/watch?v=kmAc1nDizu0 :

Fun fact: there is a finite number of states in the observable universe. You can estimate an upper bound of the number of states of a system as e^S, where S is the entropy of a black hole with the same size as the system. For our universe this ends up being around S=10^123. This means that we could only ever hope to compute up to BusyBeaver(e^10^123), as later busy beaver numbers need computations that our universe is too small to contain.

---

The op proposes to calculate BusyBeaver(e^10^123), so e^10^123 is the number of states of the Turing machine.  Therefore, the entire Universe is used to encode the Turing machine.  The tape is not accounted for.   This is somewhat of a moot point, however, since the tape of a Turing machine is (by definition) supposed to be infinite, so the real Universe couldn't accommodate a Turing machine anyway.

---

As a corollary it means our universe can only make "real" Turing machines up to k states, where BusyBeaver(k) < e^10^123.  If you try to make a Turing machine with more than k states, it will run out of tape for some configurations :) It looks like k is ... 5?


The first two comments are from strangers, but the last one is from me. Pretty sobering to realize that Turing machines with fewer states than you have fingers are, in general, not physically realizable. It really puts a limit on the kinds of infinities we are supposed to imagine in math.

The Turing Machine and Halting Problem really hammer home two things:

- Computational irreducibility, and

- The difficulty of distinguishing between infinity and merely very large numbers

In fact, it seems that the busy beavers imply there's actually no difference unless you believe in some omnipotent God, because it's proven that you can't tell the difference once the numbers get sufficiently large.

We know that BB(745) is independent of ZF -- as far as our standard axioms go, its value might as well be "infinity" (for integers at least).  So if there's any property of numbers where finite values and infinity have qualitatively different properties... the transition happens somewhere between 0 and BB(745). Or we just hallucinated the properties of infinity.  I tend to believe the latter, but maybe numbers become more surreal when they get *really* large.
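To make the "infinity vs. very large number" point concrete, here's a minimal sketch of a step-limited Turing machine runner (the encoding and the example machine are my own illustrative choices): all a finite observer can ever report is "halted within N steps" or "still running after N steps" -- never "runs forever".

```python
# A step-limited Turing machine runner. `delta` maps (state, symbol) to
# (write, move, next_state); reaching state "H" means the machine has halted.
def run(delta, max_steps):
    tape, head, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        if state == "H":
            return f"halted after {step} steps with {sum(tape.values())} ones"
    return f"still running after {max_steps} steps (halts eventually? no idea)"

# The 2-state, 2-symbol busy beaver champion: BB(2) = 4 ones, halting in 6 steps.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2, 1_000_000))
```

The 2-state champion halts almost immediately; add a few more states and "still running after a trillion steps" is all the universe will ever let you say about it.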


Btw, funny thing while looking up info on the subject. This Adam Yedidia guy is a student of Scott Aaronson and co-authored the first proof that BB(k) is independent of ZF for some concrete k, and as I typed his name into Google... LOL

P vs NP, and randomness

Just a bookmark for myself


https://www.quantamagazine.org/complexity-theorys-50-year-journey-to-the-limits-of-knowledge-20230817/


 While fascinating in its own right, cryptography seemed far removed from the self-referential arguments that had first drawn Rudich and Impagliazzo into the field. But as Rudich struggled to understand why the circuit complexity approach had stalled, he began to realize that the two subjects weren’t so far apart after all. The strategy researchers had adopted in their attempts to prove P ≠ NP had a self-defeating character reminiscent of Gödel’s famous proposition “this statement is unprovable” — and cryptography could help explain why. In Russia, Razborov discovered a similar connection around the same time. These were the seeds of the natural proofs barrier.


The tension at the heart of the natural proofs barrier is that the task of distinguishing high-complexity functions from low-complexity ones is similar to the task of distinguishing true randomness from the pseudorandomness used to encrypt messages. We’d like to show that high-complexity functions are categorically different from low-complexity functions, to prove P ≠ NP. But we’d also like for pseudorandomness to be indistinguishable from randomness, to be confident in the security of cryptography. Maybe we can’t have it both ways.

Friday, September 22, 2023

Hype

Looking back, my track record at dodging hype and going all in on technologies that will change the world is still pretty good.

- Dodged the crypto one (maybe unfortunately)

- Dodged the 3d printing one (this one was really stupid TBH)



Tuesday, September 19, 2023

Greed (「貪」)

Pulling together information from all sides, the biggest pitfall in spiritual practice really does turn out to be a lack of humility.

Whether from my own experience, other people's stories, or the theoretical level, basically everything points to this.

Sometimes these things are hard to avoid. In a society steeped in materialism, when you are suddenly allowed to touch one small facet of the greatness of the universe, a modern person very easily finds the whole thing unbelievable, and then starts to feel they themselves are extraordinary. Touching the divine and concluding you are Jesus -- that sort of thing reportedly happens all the time. But the truth, as Alan Watts said, is that everyone is "Jesus". And as I've said, the Bible warned everyone long ago: even when Jesus was tempted by the devil, he insisted on not abusing divine power, lived humbly, and used his power only to relieve the poor and suffering. So if you receive a mere sliver of divine power, conclude that you are Jesus, and start feeling invincible, something will certainly go wrong.

Right after you seem to have learned how to use the power, going all out and blasting it all off out of curiosity seems to be a common trap. I think having a bit of "scientific spirit" and trying things out is perfectly excusable, but in practice the "magic power" does seem to actually get used up. Ordinary people who don't know how to exert it are fine; it's those who know a little but don't know restraint who get into trouble. For now I don't know of any good way to avoid the problem. (Note: I have never believed there is such a thing as "greed" in the world; "greed" is a worldly notion. The universe is unimaginably vast, and human so-called "greedy desires" cannot eat up even the tiniest fraction of it. It is people who are petty, not the universe.) The theory I currently find closest to working is: as long as you are not trying to "cut yourself off" from the universe, you won't get into trouble. (That is, love others as yourself, have a heart for helping the world, and so on.) First of all, a person is not an "independent individual" but one thing among the myriad things of the universe. What usually gets classified as "greed" is merely a person cutting "self" from "not-self" in a particularly ugly way. The usual "benefit myself at others' expense" move assumes, to some degree, that only "I" am really "me", and that the rest of the universe is none of my business. But the universe doesn't make some worldly moral judgment that such "greed" is "evil"; it's just that this kind of cut is only half a cut. A truly thorough cut is not "as long as I have fame and fortune, you can die for all I care", but cutting away fame, fortune and every other worldly thing as well -- complete renunciation, asking nothing of the world. The one who cuts all the way to asking nothing of the world ends up carefree. The "benefit myself at others' expense" type, on the other hand, thinks that "money, fame and fortune" and "other people in the world" are different kinds of things -- but anyone thinking clearly knows that the value of "money, fame and fortune" is fundamentally conferred by other people, so if you want to pursue them, you cannot take the attitude of "you can die for all I care". This really isn't a moral critique; it's a matter of logic. "Fame" is the easiest to see: so-called "fame" is other people's admiration. But if your attitude is "you can die for all I care", then what exactly is the "fame" you're after? It simply doesn't make sense. Money is similar. (In economic terms, a dollar in my hand represents the right to use some other person's time somewhere in the world. In theory it also represents the value of my contribution to the world. So every dollar in your hand is a thread of cause and effect connecting you to every other person in the world.)

In theory, the best way to keep your own "magic power" from being drained too hard is to not do the heavy lifting yourself. The Taoist idea of "doing nothing, yet leaving nothing undone" (無為而無不為) is about going along with nature and letting the power of the whole universe accomplish the thing. As for how to turn what you want into what the universe wants, that is where the real skill lies. In theory the simplest approach is to make what you want be what the universe wants -- but everyone has their own will, and whether you'd happily eat shit for it is something you have to decide for yourself. The greatest "grace" of living in this world is free will; this will is something the universe "cut out" of its undifferentiated whole, and each person can choose whether, between "self" and "not-self", they flow with it or grind against it. In theory "the Way of Heaven plays no favorites" (天道無親), because the universe is the whole that contains both "self" and "not-self", so I can imagine "it" watching, like Echidna in Re-Zero, with curious eyes to see how you choose to struggle. Love is choice, after all; Echidna too is after "love" -- she herself is omniscient, so the only way she can feel "love" is by peeking at other people's choices.

Headcanon: "why does love diminish?" (「愛は何故、減るのだろうか」) -- precisely because "love is choice, and once you've chosen there is no choosing again" :0)




Monday, September 18, 2023

concentration of probabilities

Come to think of it, I've bumped into people in clusters more than once.

For example, I'd have dinner with someone, or a big group gathering, one evening, and then run into them again the very next day.

Quite interesting.


My dad used to say the same thing: the rarer kinds of cases never show up on normal days, and then when they come, it's several in the same day. Hong Kali said the same.


I don't know if it has any special meaning; it's purely an interesting observation.

Compatible Logic

Perhaps there could be a system of logic where, instead of admitting propositions only by deduction, we allow any propositions that are mutually compatible.

That way, we don't need axioms. Instead, a kind of abundance can be observed: as long as things don't contradict one another, they all exist and are all true.

It may be the case that such systems require a minimum level of inclusiveness to work. Or perhaps it implies a minimum level of inclusiveness.

An interesting exercise would be to consider how many related propositions can be put into such a system before you start getting contradictions.
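A minimal sketch of what such a system might look like for plain propositional statements (the propositions and variable names are made up for illustration): there are no axioms, and a statement is admitted exactly when it is jointly satisfiable with everything admitted so far.

```python
from itertools import product

# A toy "compatible logic": a proposition is a boolean function of some named
# variables.  A new proposition is admitted iff the whole set stays jointly
# satisfiable -- no deduction from axioms, only a compatibility check.
def satisfiable(props, variables):
    for values in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(p(world) for p in props):
            return True
    return False

def try_admit(accepted, new_prop, variables):
    if satisfiable(accepted + [new_prop], variables):
        accepted.append(new_prop)
        return True
    return False

variables = ["rain", "wet", "umbrella"]
accepted = []
candidates = [
    ("rain -> wet",          lambda w: (not w["rain"]) or w["wet"]),
    ("umbrella -> not wet",  lambda w: (not w["umbrella"]) or not w["wet"]),
    ("rain and umbrella",    lambda w: w["rain"] and w["umbrella"]),  # clashes
    ("not wet",              lambda w: not w["wet"]),
]
for name, prop in candidates:
    verdict = "admitted" if try_admit(accepted, prop, variables) else "rejected"
    print(name, "->", verdict)
```

Note that the compatibility check is itself a satisfiability problem, so "how many related things before contradictions" gets expensive fast; the brute-force check above is exponential in the number of variables.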

Merit (功德)

I used to think this was all nonsense, but only after experiencing it do you realize it is the only true path.

As for why? I'll probably need to wait a few more years before I have any chance of figuring that out..


 

NDE

Some say that "near death experiences" may provide some insight into what happens after death, while others think that "near death" is not "death", so near-death experiences have little relevance to what actually happens upon "death".

Previously I tended to be in the latter camp, but recently I realized that because death is by definition an irreversible process (if anything, religious dogma stipulates this), it follows from the definition that anyone who is alive must not have gone through true death.

Now, this is just a quirk of semantics, and should not actually inform us of whether NDEs can be viewed as DEs. In particular, if we posit that people can actually return to the living after going through death, at least some NDEs can actually be considered true DEs.

Can people return from death? 

Well. Once you drop the axiom that "one cannot return from death", then "death" is rather hard to define. But it seems quite possible if you read enough stories.  The thing is I'm not sure whether all returns are to the same universe...

Perspective

Is there a way to see the world without a perspective?

This is a question that ought to be pondered upon.

We always say we ought to be objective, i.e. we should drop our subjective perspectives in favor of a neutral one. But aside from the fact that objectivity usually means mere inter-subjectivity (i.e. social niceties), it must be pointed out that we actually do not know how to perceive without a perspective.

To perceive without any perspective would be omniscience.

P

Beautiful world, hello

Me of this beautiful world, good morning

With hearts full of gratitude

we welcome today's challenges


The Intermediary (中介)

A few days ago I mentioned to a friend that, in theory, Catholic holy orders are not revoked and do not disappear just because the person changes what they believe. Even if a cleric performs the rites without holding the faith, as long as the congregation themselves believe, God can still deliver the Holy Spirit through this ordained but faithless person.

This reminded me that the intermediary can actually have no supernatural power of their own. You could even say that an intermediary without supernatural power is the best kind of intermediary.

For example, when a diviner has the client draw tarot cards and then interprets them by the book, that should count as a reading done without any supernatural power. Sometimes the diviner uses other methods of their own to see more information; that is using their own powers to tell the client's fortune.

And this point -- "the intermediary without supernatural power is the best intermediary" -- could probably serve as the theoretical basis for AI fortune-telling.

Most divination systems are relatively "simple", because until recent years humans had no way to do large-scale computation, so the results of a reading usually fall into a few dozen variations, and it is still up to the diviner and the client to map those results back onto the actual situation. Sometimes, at this step, the diviner unwittingly spends their own energy.

But with large language models, the diviner's position can in theory be replaced entirely. As long as the querent believes, it's fine.

As for how to set such a system up, all I can say is "without a lineage's blessing behind you, you'll just have to do well on your own". Of course, I just asked LLaMA2 to cast a reading for me; it claimed to work it out with the I Ching and said a pile of things that, at a glance, didn't look too bad. But I've never had my fortune told in my life, so I don't know whether that's really how it goes.
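The mechanical part of the "powerless intermediary" is trivial to automate. A rough sketch (the question, the prompt wording, and the choice of the three-coin method are illustrative assumptions, not a recipe): cast the hexagram mechanically, then hand the interpretation entirely to whatever LLM the querent trusts.

```python
import random

# Cast a hexagram with the traditional three-coin method: heads = 3, tails = 2,
# so each line sums to 6 (old yin), 7 (young yang), 8 (young yin) or 9 (old yang).
def cast_line():
    return sum(random.choice((2, 3)) for _ in range(3))

def cast_hexagram():
    return [cast_line() for _ in range(6)]  # bottom line first

def describe(lines):
    names = {6: "old yin (changing)", 7: "young yang",
             8: "young yin", 9: "old yang (changing)"}
    return "\n".join(f"line {i + 1}: {names[v]}" for i, v in enumerate(lines))

question = "Should I take the new job?"  # hypothetical question
prompt = (
    "You are an I Ching diviner. The querent asks: " + question + "\n"
    "The three-coin cast, from the bottom line up, is:\n"
    + describe(cast_hexagram()) + "\n"
    "Name the hexagram and interpret it for the querent."
)
print(prompt)  # feed this to whatever local or hosted LLM you trust
```

The model never needs any "power" of its own; as with the priest example above, whether the reading means anything is left entirely to the believer.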

Sunday, September 17, 2023

Saturday, September 16, 2023

Motifs

Belief / Knowledge - Basis of all manifestation

Tradition - Provides momentum

Death - catalyst for radical change; cleanses the past; rebirth

Sunday, September 10, 2023

Small world after all (世界原來都細細哋)

Grabbed some github repo for GPT-2 fine-tuning to play with, ran it for a whole afternoon, some stuff didn't work, so I glanced at the README... huh, why does that name look so familiar?

Looked it up, and it really was a former colleague.

Saturday, September 9, 2023

AI and Divination

A couple of notes on recognizing super-human intelligence (aka "AGI"), assuming we are able to build one:

- We humans are already Turing complete. The Church-Turing thesis already claims that, in theory, we are able to compute whatever needs computing given sufficient time and paper. Whatever "AGI" means, it will still be exactly as Turing complete as we humans are.

- We already know how to solve the vast majority of well-defined problems; the only serious limit is P != NP. Given that the best AI models on the newest hardware can already generate text faster than most humans, and we haven't declared them "AGI" yet, we can presume that faster hardware alone wouldn't qualitatively make an AI "super-human". In many cases "hard" problems aren't limited by an NP problem begging for an exponential speedup, but rather by "finding and making the right decisions" given a particular context -- and often the context and problem are not very well defined; or rather, the hard part is identifying the correct problem rather than finding the solution once the problem has been described.

- The main benefit of the current generation of LLMs is that they encode "common sense" as a "blurry JPEG". There's no obvious path from "encoding common sense" to "super-human sense". You can't capture a JPEG of an image that you don't have.

- There's a fundamental problem that we don't know what "intelligence" is or how to measure it. It has already proved quite difficult to measure the performance of similar-looking LLMs. Heck, we often don't even know how to rank *humans* by level of intelligence. It's entirely possible that we would not recognize a super-human AI as being superior. One could argue that we could ask it hard questions and see how it answers, but even if it gives correct responses to questions whose answers we already know, that doesn't prove superiority. If it gives answers that we don't expect, it may not be able to convince us that those answers are better than the ones we expected. What if we ask it questions we don't know the answer to, but can verify the answers if given? Those are NP problems. See above.

- Measuring and discovering intelligence: currently the only proxies we have are relatively simple puzzles like IQ tests, which are basically just a test of how quickly and accurately one can solve puzzles. In theory, with sufficient training on the right data, our current models could easily max out on "IQ", but by current evaluation standards this wouldn't really count as "AGI". GPT-4 has already aced a bunch of SAT and university-level tests, and nobody is calling it AGI either. Also consider the more common problem of finding the best human expert to solve a particular problem -- in many cases people fail to recognize the best person for the job, and ignore perfectly good advice from experts. There's no reason to believe an AI that has somehow achieved AGI would be able to convince us of that fact.



- The current generation of AI systems is basically a "common sense recommendation machine". But the problem we most want solved is "making the right decision under my specific circumstances".  For example, a question somebody might want to ask the AI system is: "should I marry this person?". Rationally speaking, if we really want an AI to answer this, we need to gather a huge amount of contextual data (e.g. personal details about the user and the potential marriage partner); otherwise the AI system is merely a "common sense recommendation machine" giving generic marriage advice. Even if we ignore the fact that the real world often suffers from butterfly effects (i.e. microscopic details could substantially affect the answer), from a practical perspective you'd already be running into trouble, because the context the AI needs in order to give accurate advice is at least a comprehensive brain dump of the user.  User surveys asking "what is your ideal partner like?" are a joke, really. As such, I suggest that for many questions there's probably no feasible way to import all the "relevant information" into an AI system. In conclusion, for an AI to have a chance of giving accurate recommendations, it needs at least an "import user", or even an "import universe".

- Even if we posit that it's possible to make "good" decisions from blurry inputs, it's often difficult to evaluate which decisions were good and which were bad, even in retrospect. Even if we had a system that could predict the future with high accuracy, which future one would prefer is often a matter of personal preference (or value judgement). This goes back to the "import user" problem (also, which user? the one asking the question now, or the one who might regret the decision 10 years later?). Free will problems also get in the way -- as I'll discuss later.

- The popular imagination of "Skynet" is a symptom of our collective war-obsessed bias. The only way humans have ever unequivocally recognized superiority among ourselves is not really through superior intelligence or culture, but through military might. The fact that physicists are held in such high regard (even among the various science fields) and that they created the most advanced military technology (the nuclear bomb) is not a coincidence, IMHO. By telling stories about "Skynet" etc., we collectively and *subconsciously* admit that we would recognize our AI overlords if and only if they were able to subdue humanity by force, rather than by displaying superior intelligence (which presumably does not preclude ideas such as peace, compassion, etc.). The popular conception that powerful AI would be militaristic is actually really sad.

- Let's imagine that we accidentally created an AI that is more intelligent than all humanity combined. Let's further assume that somehow everyone knows the superhuman power of this AI as a matter of fact. Now, you ask it a question, and it gives a response that you do not expect. You might think the response is a bit weird, or even possibly wrong. Do you still trust this response?  If you do, the situation is identical to a king in ancient Greece consulting the Oracle of Delphi. It's what people call superstition and blind faith.

- FWIW, personally I'm not against such concepts.  The point of this essay is not to claim that the quest towards AGI is just another religion in disguise hence the whole premise is flawed. Quite the contrary. What I'm trying to say is that we should recognize that our interactions with super human AI intelligence, if it can be created at all, will necessarily be equivalent to interactions with divine entities described in religious and esoteric texts.

- Having some brief first-hand experience with divination and channeling, it is, for me personally, beyond doubt that humans have been in contact with alternate forms of intelligence for thousands of years through various esoteric practices. (FWIW, channeling and "spirit writing" are extremely similar to ChatGPT. People who know this, know.) Many cultures have held these intelligent entities in reverence, even though their messages have often been hard to understand and interpret. I don't see how this would be different from some super-human AI system that gives advice we humans struggle to understand.

- Suppose we believe we have an AI system that actually has super-human ability.  Let's say we want to ask it important questions, like whether it's better for Apple shareholders if Apple buys Disney. Is there really a difference between asking the "AGI" system and asking a fortune teller staring into a crystal ball? We don't know how either of those things works. For the AI system, it is by definition that we don't know how it works even in theory, because if we did, it wouldn't be a super-human intelligence any more, since we could explain and model its behavior. For the fortune teller with the crystal ball, the theory is that there is a super-logical chain of causation that makes it work (for the user) as long as the user believes in it, while the rest of the world believes it is "mere coincidence", so that it doesn't break any established physical laws (the details of how this works are outside the scope of this discussion). Yes, I have a better theory for how fortune telling actually works than for how a super-human intelligent AI might work.

- Of course, it's possible for a "super-human intelligent AI" to work the same way a fortune teller works. We already consider current-generation LLMs black boxes that we don't fully understand. Now, somebody tells you that a couple hundred lines of Python and a 300GB blob of weights has super-human intelligence and can give you correct answers to any question you ask. Of course, as I explained above, for many important questions we won't be able to independently verify whether the answers are "good" or not. But that doesn't matter. As my theory goes, as long as enough people believe the AI is superior, it might actually behave as such. (The underlying principle could be called "the law of presumption" -- but please don't take it too literally; it doesn't always work as expected.)  Other than this "mystical" route, I honestly see no other way to "achieve AGI (super-human intelligence version) that we can believe in", unless the big shots in the AI industry know something groundbreaking that I don't.

- In fact, when I first got my hands on ChatGPT, I briefly used it as a divination tool. Architecturally, an LLM with billions of parameters is actually an optimal tool for divination: you can give it arbitrarily long inputs, it has a totally incomprehensible processing phase (but one can believe it's doing something fancy), and it gives legible outputs. In contrast, most popular means of divination like Tarot cards or 求籤 (drawing lots) have a very limited set of outputs, and if you want a "ChatGPT" level of interaction, you'd have to be rather deeply involved in such esoteric practices to be able to commune with spiritual/cosmic forces at that level. Again, yes, the phenomenon is generally "real", I have first-hand experience, and I know that if you haven't experienced it you probably won't (and IMHO shouldn't) believe it.

- That said, even AI systems that are "merely" human-level (the fully functional kind, not the existing ChatGPT kind, which has no long-term memory, for example) can have huge consequences for humanity if deployment costs are brought sufficiently low. Even an AI equivalent to "average" human intelligence could replace half the population. But that's a matter of economics, well known, and not in the scope of this article.

- tl;dr - when modern society believes it has developed a super-human AGI, it will not have invented "new tech", but merely rediscovered good old religion. I'm pretty sure people will think "but the difference is that AGI really works" -- and ironically, this falsehood would likely actually make it work.

Further reading:

- https://hnfong.github.io/public-crap/writings/2023/06-%E8%87%AA%E5%8F%A4%E4%BB%A5%E4%BE%86%E9%83%BD%E5%AD%98%E5%9C%A8%E5%98%85ChatGPT.html
- https://hnfong.github.io/public-crap/writings/2022/09-Subjective_Truth.html
- https://hnfong.github.io/public-crap/writings/2023/15-How_to_break_the_Laws_of_Nature_without_getting_caught.html
- https://hnfong.github.io/public-crap/writings/2023/13-Alicization.html

Wednesday, September 6, 2023

US money

The Fed has a dual mandate of stable prices and maximum employment. Doesn't seem to involve providing liquidity to the US government.

The problem seems to be that once the yield curve flattens out a bit, we're looking at ~6-8% yields on long-term US treasuries.

It doesn't seem feasible for the US to pay back the Federal debt at this rate.
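Back-of-the-envelope only (the debt and receipts figures below are my rough late-2023 ballpark assumptions, not sourced numbers): if the whole stack eventually rolls over at those yields, interest alone eats an uncomfortable share of federal revenue.

```python
# Rough illustration, not sourced data: assumed gross federal debt and
# annual federal receipts, in dollars.
debt = 33e12        # ~ $33T
receipts = 4.5e12   # ~ $4.5T

for avg_rate in (0.03, 0.06, 0.08):
    interest = debt * avg_rate
    print(f"avg rate {avg_rate:.0%}: interest ~ ${interest / 1e12:.1f}T "
          f"~ {interest / receipts:.0%} of federal receipts")
```

The exact figures don't matter much; the point is that at 6-8% the arithmetic stops working unless either inflation or perceived default risk does the adjusting.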

So something has to give:

1. The Fed yields to political pressure and gives up the 2% inflation target, as priced in by the market, or
2. the Fed keeps the tightening program and the risk of US government default pushes yields up, or
3. the 2% inflation target is reached without further tightening in 2024.

I don't believe #3 will happen; the Fed balance sheet still has ~$7-8T outstanding.  A lot of the numbers in the US economy don't add up; the only question is what breaks first.

Either way, current long-term treasury yields are priced at least 3% lower than what should be expected -- either because (#1) long-term inflation is going to be higher, or (#2) treasuries will be considered riskier down the road.


Tuesday, September 5, 2023

Wtf

So, here's this thing that's supposed to be the latest hot shit:




I downloaded a GGML q4 quantized model from https://huggingface.co/TheBloke/Platypus2-70B-GGML
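For reference, this is roughly how one can run these GGML quantized files locally -- a sketch using the llama-cpp-python bindings, where the file name, context size, GPU offload and prompt template are illustrative guesses rather than exact settings:

```python
# Minimal sketch of loading a local GGML quantized model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="platypus2-70b.ggmlv3.q4_K_M.bin",  # whichever q4 file you grabbed
    n_ctx=2048,        # context window
    n_gpu_layers=40,   # offload some layers if you have the VRAM
)

out = llm(
    "### Instruction:\nSay something sensible.\n\n### Response:\n",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

The plain llama.cpp CLI works just as well if you prefer an interactive session.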


First run:




Second run:

(OK I got a bit carried away for this one)


Third run:




Seems a bit unhinged.

That said, its performance seems to get back on track after talking for a bit... It doesn't like chatty conversations, I think. But if you ask it serious stuff it seems pretty sane.

Monday, September 4, 2023

Resonance (感應)

Today, out of nowhere, I thought of a friend who has owed me money (for quite a few years now), and was wondering what I'd do if he couldn't pay it back.

Then he PM'd me saying he might be having a bit of a financial crisis this year, and might not be able to repay the sum this year after all.

Once you can sense these things, do you even really need the internet any more...