Programmer, trans, an IndieWeb addict. Doesn't know if she's real or merely a vestige of a past long gone. Opinions are my own and do not represent opinions of any of my employers, past, present, or future. If I start to shill some cryptocurrency project, RUN – this is not me.
This section will provide interesting statistics or tidbits about my life at this exact moment (with maybe a small delay).
It will probably require JavaScript to self-update, but I promise to keep this widget lightweight and open-source!
JavaScript isn't a menace, stop fearing it or I will switch to WebAssembly and knock your nico-nico-kneecaps so fast you won't even notice that... omae wa mou shindeiru
Wait, how would you make client certificate auth work on a subpath if it operates at a layer below HTTP? One thing that comes to mind is making client certs optional and returning a 401/403 on pages that should require auth, and that should absolutely be doable with a liberal application of subroutes in Caddy. (The server will still request certificates, because it doesn't know which page you're visiting until the TLS handshake completes, but since the server indicates which roots it trusts, this shouldn't be a problem for random visitors – their user agent will skip the prompt if it has no suitable certificates.)
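Something like this Caddyfile sketch is what I have in mind (completely untested; the hostname, the CA path, and the /members/ prefix are all made up for the example):

example.org {
    tls {
        client_auth {
            # Ask for a certificate but don't demand one; verify it
            # against my own CA only if the browser actually sends one.
            mode verify_if_given
            trusted_ca_cert_file /etc/caddy/client-ca.pem
        }
    }

    # Pages under /members/ need a certificate; everyone else browses freely.
    @noClientCert {
        path /members/*
        expression {tls_client_fingerprint} == ""
    }
    respond @noClientCert "This page requires a client certificate." 403

    # ...the rest of the site config goes here...
}

That's a plain matcher with a respond rather than a full subroute, but the idea is the same.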
Oh, finally remembered that I can put a resize set width command into a for_window clause to get some of my GTK apps resized to a small sidebar on my display. Bowl looks much nicer on a small display (mainly because only the post composer widget is implemented, and the two-column layout I'm thinking of will require a few more things the app lacks for now).
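The line ends up looking something like this (the app_id is a guess at what swaymsg -t get_tree would report for Bowl, and 500 px is an arbitrary width):

# Shrink Bowl down to a narrow sidebar column whenever one of its windows appears.
for_window [app_id="bowl"] resize set width 500 px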
Now, if only Sway included a positioning command that puts windows relative to the top-right or bottom-right corner...
(I suppose that is not hard to implement; I managed to do it with an external script, but a script incurs a delay that native commands don't. So it would be nice to patch Sway to support corner-relative window moves.)
TIL that HTML forms with method="POST" and no action attribute (meaning the form POSTs to the same URL it was loaded from) do not clear the query string, so whatever query string the page was originally loaded with gets passed through to the form handler.
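For example, a form like this served at /notes?draft=1 (URL and field names made up) will POST right back to /notes?draft=1, query string and all:

<!-- No action attribute, so the browser submits to the current URL, ?draft=1 included. -->
<form method="post">
  <input type="text" name="content">
  <button type="submit">Publish</button>
</form>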
Random thought: perhaps modern LLM interfaces are oversimplified, which leads users to overestimate their capabilities (such as ascribing "intelligence" or "sentience" to the models).
Perhaps a good LLM interface should expose its guts and details so it is obvious how it works.
A bit of deliberate friction or dizzying complexity might be sobering for the end user.
Had a fight with the Content-Security-Policy header today. Turns out, I won, but not without sacrifices.
Apparently I can't just insert <style> tags into my posts anymore, because now I'd have to either put nonces on them or hash their contents (the latter would be preferable, because that way the policy stays static).
I could probably do the latter by rewriting HTML at publish-time, but I'd need to hook into my Markdown parser and process HTML for that, and, well, that's really complicated, isn't it? (It probably is no harder than searching for Webmention links, and I'm overthinking it.)
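For the record, the hash variant would look roughly like this; the hash below is a placeholder, and it has to be the base64-encoded SHA-256 of the exact text between the tags:

<!-- Somewhere inside a post: -->
<style>.photo-grid { display: grid; gap: 0.5em; }</style>

<!-- ...and the matching header: -->
Content-Security-Policy: style-src 'self' 'sha256-<base64 SHA-256 of the CSS above>'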
I really need to make something to syndicate to Bluesky. It seems wonderful to have a new alternative to the now-dead Twitter, but I still want to post to my blog first.
ATProto feels a tiny bit overengineered. It was obviously built to have a semi-centralized reach layer, and that shows in its design. Plain HTML pages and/or microformats2 are a much simpler format, and at times richer than Bluesky's default Lexicon.
Mozilla is playing with fire. I don't like their latest "AI" pivot. AI doesn't exist and never will, and whatever is called AI right now is not it. And is not worth using.
Seriously, "AI text" "detectors"? They don't really work that well. They sometimes also misidentify input texts written by someone not proficient with language as LLM output.
Tailscale, without any sort of warning or public announcement, seems to have banned all Russian IPs from connecting to its coordination server.
I had to spend an entire workday migrating my setup to Headscale, the self-hosted alternative! I could've spent this time playing games or working, if not for this bullshit!
This "pseudo-sanctions compliance" virtue signalling must stop. All lawyers and PR personnel responsible for this should be fired and shunned. VPNs are critical to allow people in oppressive countries to get truth via the Internet, and just banning them from connecting to VPNs is exactly what the oppressors want.
If I were to include a quote made by a language model on my website, I'd like it to be specifically highlighted, to make it obvious that the output was not written by a human.
<figure class="llm-quote">
<blockquote>
<p>I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI."</p>
</blockquote>
<figcaption>
Output generated by Llama 3.2-3B
</figcaption>
</figure>
To get something like this (I sure hope this will display correctly! I still need to tweak my Markdown parser a bit.):
I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI." I was developed by Meta, designed to process and generate human-like language. Like other large language models, I use natural language processing to understand and generate text. My primary function is to assist users with information and tasks, answering questions, providing definitions, summarizing content, and even creating text based on a prompt.

I don't have a personal experience, emotions, or consciousness like humans do. Instead, I operate by analyzing patterns in the data I was trained on and using those patterns to generate responses to user input. My knowledge is based on the data I was trained on, which includes a massive corpus of text from various sources, including but not limited to books, articles, research papers, and websites.

I am constantly learning and improving my language understanding and generation capabilities. This is done through machine learning algorithms that allow me to refine my performance over time. However, my limitations are also important to consider. I can make mistakes, particularly in situations that require a deep understanding of context, nuance, or subtlety. If you have any questions or need assistance with a task, feel free to ask, and I'll do my best to help!
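The highlighting itself can then hang off that llm-quote class with a few CSS rules, something along these lines (the colors are purely illustrative, not necessarily what this site will end up using):

/* Make machine-generated quotes visually distinct from human ones. */
figure.llm-quote {
    border-left: 0.25em solid rebeccapurple;
    background: #f5f0fa;
    padding: 0.75em 1em;
}
figure.llm-quote figcaption::before {
    content: "🤖 ";
}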