「玄之又玄,衆妙之門。」
(Where the Mystery is the deepest is the gate of all that is subtle and wonderful.)
— 老子,道德經 (Laozi, Dao De Jing)
this is my little personal website where i share my silly thoughts and the things i love: mostly, cute-and-funny anime, manga, and vtuber related stuffs! feel free to look around and UOHHH to your heart's content~
i sometimes think i've found my way only to realize i'd but momentarily forgotten i'm lost, but perhaps being lost is the point, just as perhaps there is meaning behind every flubbed word and bitten tongue (>ω<)
噛みまみた?いいえ、神はいた、全く🙏
(did i flub my words? no, god was there, truly 🙏; a pun: the flubbed 噛みました "i bit my tongue" sounds like 神はいた "god was there")
awawawa awawawa. awawawa awawa, wawa wa awawawawa awa awawa wawa awawawa! awawawa wawa awa. awawawawawa awawa, wawawawa awawawa, awawawawawawawawaw wawa awawa awawa wawawawa!
here are lists of some of my favorite lolis, manga, anime, VNs/games, and vtubers! i might come back to this at some point and write some comments for each~
i go by many names but you can call me "muon" (繆音). as u probably already know, i love anime, manga, and vtubers, especially cute and funny stuff like cgdct, lolis, and yuri. i also enjoy drawing, making music, reading, video games, and learning new things. first and foremost, i'm an otaku, and this is perhaps the only identity label i truly subscribe to; nationality, gender, etc. are irrelevant to me outside of their pragmatic necessity.
i'm currently a phd student with general research interests in linguistics and cognition, hoping to finish within the next year. i've lived in many countries all around the world throughout my adult life so far, and speak many languages to varying degrees of proficiency. english is my native language, with mandarin being my next best language. other languages i'm decently proficient in include: japanese, thai, cantonese, korean, and indonesian. i also know some finnish, estonian, russian, polish, and general romance (italian, spanish, french), though i'm pretty rusty with these and haven't invested as much time into them. i wouldn't call myself a 'polyglot', though, as i think we're all polyglots even if we don't realize it. i also don't think languages are very meaningful units of...well, anything (please don't ask me how many i speak).
i am a nijikon (attraction to '2D' anime/manga characters, as opposed to real '3D' humans) and very strictly 3D celibate. i personally value choice over sexuality, so i find the question of my own sexuality irrelevant; likewise, i personally don't believe consent can be reliably established in real romance/sex, so abstinence (and the fictional expression of romance/sex) is a matter of both preference and personal ethics for me. in terms of preference, well, you can probably tell i like lolis and yuri, and i have a strong dispreference for anything masculine or phallic.
i made this website mainly because i'm tired of big tech and mainstream social media, all of which manipulate us for profit and kill creativity, trust, and civil discourse. governments keep passing more censorship and surveillance laws in response, which only adds fuel to the fire. but i realized anyone can reject that and build their own site, take control of their virtual space, and express themselves properly outside the sterile confines of modernity, in the spirit of the old internet, where indie spaces ruled instead of algorithms.
「この世界はあまりに巨大な複雑なので、色んなものが不明のまま過ぎ行ってゆきます。」
(The world is so massive and complicated that many things pass us by without us even knowing.)
— つくみず、少女終末旅行 (Tsukumizu, Girls' Last Tour)
"I know that I know nothing."
— Socrates
"There is no law except the law that there is no law."
— John Wheeler
"In the real world, one has to guess the problem more than the solution."
— Nassim Taleb, Fooled by Randomness
"Extraordinary claims require extraordinary evidence."
— Carl Sagan
"It is the mark of an educated mind to be able to entertain a thought without accepting it."
— uncertain (commonly attributed to Aristotle)
“When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master — that's all.”
— Lewis Carroll, Through the Looking-Glass
nihahaha~ koyuki smirks as she beats you in poker for the 100th time in a row. how the hell does this brat do it? her smugness certainly demands correction. but, while she might not realize it herself, she's actually being very cunny and has a natural knack for something called "bayes' theorem." what is bayes' theorem? it's a magical tool that shows us why we're all stupid, and how to better manage our stupidity by questioning our assumptions, remaining open to hidden possibilities, and thinking in probabilities instead of absolutes.
before we go further, let's nail down what we're actually talking about. probability is just a number between 0 and 1 that describes how likely something is. 0 means "no way that's happening" and 1 means "definitely happening." everything else lives in the middle.
you use probability all the time without thinking about it. "probably it'll rain today" or "i doubt they'll say yes" or "there's a good chance i'm being paranoid." those are all probability statements, just fuzzy and in words instead of numbers. the cool thing: probability can be about literally anything. not just coin flips and dice rolls. probability that your friend is mad at you. probability that a movie is good based on the trailer. probability that you're right about something. once you think in probabilities, you stop thinking in absolutes. and that's where better thinking starts.
here's a simple question: if i told you a coin landed on heads, how sure are you it's a fair coin versus a weighted one? probably pretty sure it's fair, right? you've seen tons of fair coins and basically zero weighted ones. that "pretty sure" feeling? that's your prior. it's the bet you're making before you even see the evidence.
now here's where it gets weird. what if i told you someone handed me a bag with 99 head-weighted coins and 1 fair coin, and i pulled one out and flipped it? same coin flip, same heads result. how sure are you now?
notice anything? the evidence is identical. heads is heads. but your confidence should flip. why? because your prior just changed. you went from "almost certainly fair" to "probably weighted." but here's the thing nobody talks about: you probably didn't think about the bag scenario at all in the first case. you just let your real-world experience do the talking. that's fine. that's how we navigate life. but notice that your prior is basically an invisible assumption you're making without really questioning it.
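if you want to see the arithmetic behind that flip in confidence, here's a tiny python sketch. the post never says how weighted the bag's coins are, so the 90%-heads weighting (and modeling your everyday experience as "9,999 fair coins for every 1 weighted one") are my made-up numbers:

```python
# bayes' rule on the coin scenario: same evidence (one heads), different priors.
# assumption: a "head-weighted" coin lands heads 90% of the time
# (the post doesn't say how weighted; change it and see what happens).

def p_fair_given_heads(n_fair, n_weighted, p_heads_weighted):
    """posterior probability the drawn coin is fair, given one heads."""
    total = n_fair + n_weighted
    prior_fair = n_fair / total          # p(fair) before any flip
    prior_weighted = n_weighted / total  # p(weighted) before any flip
    # likelihoods: p(heads | fair) = 0.5, p(heads | weighted) = p_heads_weighted
    evidence = 0.5 * prior_fair + p_heads_weighted * prior_weighted
    return (0.5 * prior_fair) / evidence

# everyday prior (assumed): almost every coin you've ever met is fair
print(p_fair_given_heads(9999, 1, 0.9))   # ~0.9998: still "almost certainly fair"
# bag prior: 1 fair coin among 99 head-weighted ones
print(p_fair_given_heads(1, 99, 0.9))     # ~0.0056: now "probably weighted"
```

identical heads, wildly different conclusions; the only thing that changed is the prior.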
what other invisible assumptions are you making right now, in things you're confident about?
okay, we're going to introduce some symbolic shorthand now. don't freak out. it's just a cleaner way to write down what we're already talking about. when we write something like p(cute|funny), we're asking: "what's the probability that something is cute, given that we know it's funny?" the vertical bar means "given" or "assuming we know." everything before the bar is what we're trying to figure out. everything after is what we already know.
here are some simple examples to practice the notation:
1. p(rain | cloudy): the probability it rains, given that you can see it's cloudy.
2. p(smug | koyuki won): the probability koyuki acts smug, given that she just won a hand.
3. p(koyuki won | smug): the probability she won, given that she's acting smug.
notice something: p(cute|funny) is not the same as p(funny|cute). this is the trap we talked about. "if it's cute, it's probably funny" is different from "if it's funny, it's probably cute." or a more concrete example: p(koyuki says "nihaha" | she did something naughty) is different from p(koyuki did something naughty | she says "nihaha"). for example, maybe koyuki nihahas every time she does something naughty, so the first probability is 1 (100%), but she also nihahas for other things, so the second probability is <1. one conditional direction doesn't equal the other. this is a confusion that gets people in trouble constantly.
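one way to convince yourself the two directions really differ is to just count. the day-by-day tallies below are completely made up for illustration:

```python
# p(A|B) != p(B|A), shown by counting. these event records are invented
# purely for illustration (the post doesn't give real numbers).
days = [
    # (did something naughty, said "nihaha")
    (True,  True),   # she nihahas every time she's naughty...
    (True,  True),
    (False, True),   # ...but she also nihahas for other reasons
    (False, True),
    (False, False),
    (False, False),
]

naughty_days = [d for d in days if d[0]]
nihaha_days = [d for d in days if d[1]]

p_nihaha_given_naughty = sum(1 for d in naughty_days if d[1]) / len(naughty_days)
p_naughty_given_nihaha = sum(1 for d in nihaha_days if d[0]) / len(nihaha_days)

print(p_nihaha_given_naughty)  # 1.0: naughty always comes with a nihaha
print(p_naughty_given_nihaha)  # 0.5: but a nihaha only means naughty half the time
```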
okay, imagine a doctor tells you that you tested positive for a rare disease. the test is 99% accurate. should you panic? most people would. but here's the problem: "99% accurate" probably means p(positive test | you have disease) = 0.99. that's "if you actually have the disease, the test catches it 99% of the time." but what you actually want to know is different: p(you have disease | positive test) = ? that's "given that you tested positive, what's the chance you actually have it?" those might sound like the same question. they're not.
this is where people get genuinely confused, and it's not because they're uniquely dumb. it's because our brains naturally flip the condition without us noticing. here's the real kicker: if the disease is super rare like 1 in 10,000 people have it, then even with a 99% accurate test, a positive result might only mean p(disease | positive test) = 1%. wild, right?
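here's that calculation spelled out in python. one assumption on my part: i'm reading "99% accurate" as both sensitivity and specificity being 0.99 (the post doesn't distinguish them):

```python
# the rare-disease flip: p(disease | positive) vs p(positive | disease).
# numbers from the post: 1-in-10,000 prevalence, "99% accurate" test.
# assumption: 99% accuracy means sensitivity and specificity are both 0.99.

prior = 1 / 10_000            # p(disease): the base rate
sensitivity = 0.99            # p(positive | disease)
false_positive_rate = 0.01    # p(positive | no disease)

# p(positive) via the law of total probability
p_positive = sensitivity * prior + false_positive_rate * (1 - prior)

# bayes' theorem
p_disease_given_positive = sensitivity * prior / p_positive
print(round(p_disease_given_positive, 4))  # ~0.0098, i.e. about 1%
```

the false positives from the 9,999 healthy people utterly swamp the handful of true positives; that's all the "paradox" is.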
this happens constantly. someone seems nice to you, so you assume they're trustworthy. but niceness and trustworthiness aren't the same. a stock has gone up, so you assume it'll keep going up. but past performance and future results aren't the same.
the question to ask yourself: am i confusing p(B|A) with p(A|B)?
here's the real beauty of bayes, though: once you understand the difference, you can use it to collect evidence and update your beliefs. if we go back to our rare disease example, you currently have a 1% chance of actually having the disease given that initial positive result. but what if you go to the doctor and take a second test, and you test positive again? what should you believe? well, now you can take that 1% and make it your new prior p(you have disease). the second test result becomes your new evidence. and here's the magic: your new belief p(you have disease | positive test) jumps way up. suddenly that 1% becomes closer to 50% (it's very unlikely you'd test positive twice by pure chance with high-accuracy tests). one piece of evidence wasn't enough to convince you. two pieces? now you're genuinely worried.
this is how bayes actually works in the real world. you don't get certainty in one shot. you stack evidence on top of evidence, updating as you go.
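the same numbers, but as a reusable update step, so you can watch the stacking happen (again assuming 99% sensitivity and a 1% false positive rate):

```python
# stacking evidence: yesterday's posterior becomes today's prior.
# same assumptions as before: sensitivity 0.99, false positive rate 0.01.

def update(prior, sensitivity=0.99, false_positive_rate=0.01):
    """one bayesian update after a positive test result."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

belief = 1 / 10_000         # base rate before any test
belief = update(belief)     # first positive test
print(round(belief, 3))     # ~0.01: about 1%, not worth panicking yet
belief = update(belief)     # second positive test
print(round(belief, 3))     # ~0.495: suddenly close to 50%
belief = update(belief)     # third positive test
print(round(belief, 3))     # ~0.99: now it's nearly certain
```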
try the interactive bayesian square below. play with the sliders. set a prior like p(cute) (how likely you think something is to start with; e.g., the base rate for a rare disease) and then adjust the likelihoods like p(funny|cute) and p(funny|not-cute) (how well the evidence fits each possibility; e.g., how likely you are to test positive if you have the disease versus if you don't have the disease). watch what happens to the posterior (your belief that something is cute given it's funny; e.g., the probability of actually having the disease given your evidence). notice how the same evidence gives different answers depending on where you started (the base cuteness/disease rate). then click on different parts of the equation to see exactly which colored section it's representing (the pink-striped area is where the event actually occurs with respect to the dark blue-outlined area).
try updating your beliefs and adding new evidence. how much does different evidence affect the outcome? if the evidence is less likely under the positive (cute) condition than the negative (not-cute) condition (i.e., p(funny|cute) < p(funny|not-cute)), what happens when you update your belief? what happens when they're equal? (hint: if they're equal, it means the condition has no effect on the outcome.)
here's a tricky one. let's say you read a news headline that fits perfectly with something you already believe. it confirms your suspicion. you feel that little rush of "aha, i was right." but ask yourself: how many headlines that contradict your belief did you see today and just scroll past? how many alternatives to this story even exist? what would have to be true for this headline to be false?
when you only think about whether the evidence supports hypothesis A (e.g., something being funny supports it being cute), you're missing something huge. you should be thinking about whether it supports hypothesis A compared to all the other possibilities (e.g., something being funny also supports it being not-cute). in other words, ask how likely hypothesis A is relative to the evidence's total probability across all hypotheses (e.g., the overall probability of something being funny, cute or not).
this is about marginal probability (you probably noticed it in the square above). it sounds fancy, but it's just the reminder that you're choosing between options, not floating in a void. if you're trying to figure out why your friend hasn't texted back, you might think "they're mad at me" fits the evidence (no text). but so does "they're busy," "they lost their phone," "they're asleep," etc. how likely is each one, really? and when you add up all the "they're not mad" scenarios, suddenly your original hypothesis looks less solid.
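to see this in action, you can put numbers on the texting example and let the marginal do its thing. every prior and likelihood below is invented purely for illustration:

```python
# "they're mad at me" vs all the other explanations for no text back.
# all priors and likelihoods here are made up for illustration.
hypotheses = {
    # hypothesis:      (prior, p(no text | hypothesis))
    "they're mad":     (0.05, 0.90),
    "they're busy":    (0.50, 0.60),
    "phone is lost":   (0.05, 0.95),
    "they're asleep":  (0.40, 0.80),
}

# marginal probability of the evidence: sum over every hypothesis
p_no_text = sum(prior * lik for prior, lik in hypotheses.values())

# posterior for each hypothesis via bayes' theorem
for name, (prior, lik) in hypotheses.items():
    posterior = prior * lik / p_no_text
    print(f"{name}: {posterior:.2f}")
```

notice: "they're mad" fits the evidence great (0.90), yet its posterior comes out tiny, because the boring explanations soak up most of the probability once you actually add them up.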
the real trap: you only thought about one alternative. the world is full of explanations you haven't considered. so here's the question: what explanations are you not thinking about? and how much confidence should you put into each one?
this is where you get to see how uncertainty actually works in real time. below, there's a coin flip simulator. but here's the twist: you get to play god and set the truth first. pick what you want the underlying probability to be (is it actually a fair coin, or is it weighted toward heads, or tails?). then flip it. a lot. watch the visual update with each flip.
the posterior distribution: after each flip, the graph shows you a distribution. this is a visual way to say "here are all the possible probabilities the coin could be (i.e., the possible explanations), and how likely each one is based on the evidence you've collected so far (i.e., where you should put more confidence)." the tall pointy part after you do some flips? that's the most likely value. if the peak is at 0.5, you think the coin is most likely fair. if it's at 0.7, you think it's probably weighted toward heads. the shape of the curve tells you something important too. a tall, skinny curve means you're pretty confident. a wide, flat curve means you're uncertain.
here's the key: at first, before you've flipped it at all, the curve is super wide and flat. that's because you assume you barely know anything. as you flip more and more, the curve gets taller and narrower. the evidence is making you more confident.
below the graph you'll also see something called a "95% CrI" (credible interval). this is a range showing "i'm 95% confident the true probability is somewhere between here and here" (the blue-shaded region under the curve). early on, it's huge. later, it gets tiny. more data means more confidence.
you'll also see two meta-probabilities: one for the probability "it's weighted heads", p(p(heads) > 0.5), and one for the probability "it's weighted tails", p(p(heads) < 0.5). these add up to 100% (or close to it if you're right on 0.5). watch how they shift as you collect evidence.
try it out: start with the default uninformative prior (that flat, uncertain starting point where you think everything is equally possible). flip it a few times. probably looks chaotic. flip 100 times. the picture gets clearer. now reset and set the coin as fair, then flip 10 times. probably looks fair. flip 100 times. definitely looks fair.
now try something different: set the coin as 60% heads, but adjust your prior sliders (alpha (α) for more belief on heads and beta (β) for more belief on tails) to represent a strong starting belief that it's fair (like heads 50, tails 50). watch what happens. how many flips does it take before the evidence convinces you it's actually weighted?
then try it again with a weak prior (heads 5, tails 5, very unconfident). same coin, same truth, but now how fast does the evidence sway you? the difference is wild. this is real uncertainty. this is what data actually looks like. this is why "i saw one example" isn't enough, and why you should be humble about what you think you know.
keep playing around with it, setting different truths and trying different priors, and pay attention to how the evidence you collect with each coin flip changes your beliefs. what happens when you start out very confident that the coin is weighted tails, but it's actually weighted heads? how long does it take you to change your mind?
tosses so far: 0 (0 heads, 0 tails)
posterior mean: -- (sd: --)
95% CrI: [--, --]
p(p(heads) > 0.5 | tosses): --
p(p(heads) ≤ 0.5 | tosses): --
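if you're curious what the simulator is doing under the hood, here's a rough stdlib-python sketch of the same math: a beta(α, β) prior over p(heads), updated by counting flips, and approximated on a grid so we can read off the posterior mean, the 95% CrI, and the meta-probabilities. the grid trick, the random seed, and the 60%-heads "truth" are all my choices here, not the site's actual implementation:

```python
# beta-bernoulli updating, approximated on a grid (stdlib only).
# counting h heads and t tails turns a beta(a, b) prior into a
# beta(a + h, b + t) posterior; the grid lets us integrate it numerically.
import random

def beta_posterior_summary(heads, tails, alpha=1.0, beta=1.0, n=10_001):
    """summarize the beta(alpha+heads, beta+tails) posterior over p(heads)."""
    a, b = alpha + heads, beta + tails
    grid = [i / (n - 1) for i in range(n)]
    # unnormalized beta density; endpoints set to 0 to dodge 0**negative
    dens = [p ** (a - 1) * (1 - p) ** (b - 1) if 0 < p < 1 else 0.0 for p in grid]
    total = sum(dens)
    probs = [d / total for d in dens]

    mean = sum(p * w for p, w in zip(grid, probs))
    p_heads_weighted = sum(w for p, w in zip(grid, probs) if p > 0.5)

    # 95% credible interval from the cumulative distribution
    cum, lo, hi = 0.0, None, None
    for p, w in zip(grid, probs):
        cum += w
        if lo is None and cum >= 0.025:
            lo = p
        if hi is None and cum >= 0.975:
            hi = p
    return mean, (lo, hi), p_heads_weighted

random.seed(0)                 # arbitrary seed for reproducibility
true_p = 0.6                   # you play god here: the coin's real bias
flips = [random.random() < true_p for _ in range(100)]
heads, tails = sum(flips), len(flips) - sum(flips)

mean, cri, p_weighted_heads = beta_posterior_summary(heads, tails)
print(f"posterior mean: {mean:.3f}")
print(f"95% CrI: [{cri[0]:.3f}, {cri[1]:.3f}]")
print(f"p(p(heads) > 0.5 | tosses): {p_weighted_heads:.3f}")
```

try swapping the default flat beta(1, 1) prior for a stubborn beta(50, 50) and watch how many extra flips it takes before the evidence drags the peak off 0.5.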
here's what you might be feeling right now: "okay, so i'm basically wrong about everything?"
not exactly. but you are probably more confident than you should be. you probably have priors you're not aware of. you probably confuse p(A|B) with p(B|A). you probably haven't thought about half the alternatives. and you definitely don't have enough data to be as sure as you feel.
this isn't cynicism. it's actually liberating. because once you get comfortable with uncertainty, you get better. you update faster. you catch yourself confusing things. you ask "what else?" instead of stopping at the first explanation.
the real skill isn't having the right answer. it's knowing that you might be wrong, and being curious enough to keep thinking anyway.
so next time you're sure about something, try this:
1. ask what invisible priors you're bringing in.
2. check whether you're confusing p(A|B) with p(B|A).
3. list the alternative explanations you haven't considered.
4. ask whether you actually have enough evidence to be that confident.
you won't get everything right. but you'll get less wrong, and you'll hopefully be less prone to being a sucker. and honestly, that's the whole point.
want to learn more? check out these resources:
1. beginner: Introduction to Probability, Statistics, and Random Processes by Hossein Pishro-Nik
2. intermediate: Bayesian Statistics by Open University
3. practical application (modeling with bayes): Bayesian Modelling Using the brms Package by Coding Club
4. advanced (for those who really like math): Bayesian Data Analysis by Gelman et al.
5. advanced (for understanding human cognition with bayes): Bayesian Models of Cognition: Reverse Engineering the Mind by Griffiths et al.