By far the best vegan meatballs we've made have been the It Doesn't Taste Like Chicken Vegan Italian Meatballs. Previous recipes we've tried have been black-bean based and tend not to hold together well enough. These don't fall apart!
I was able to get tracecontext passthrough to OTel working! I was missing the opentelemetry_sdk::trace::TracerProvider, which pulls in the information used to build the OTel data that gets sent to collectors.
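For reference, a minimal sketch of the shape that finally worked for me, assuming the tracing-opentelemetry bridge and the stdout exporter as a stand-in (crate APIs shift between versions, and the service name is a placeholder):

use opentelemetry::trace::TracerProvider as _; // trait that provides .tracer()
use opentelemetry_sdk::trace::TracerProvider;
use tracing_subscriber::layer::SubscriberExt;

fn init_tracing() {
    // The piece I was missing: an explicit SDK TracerProvider, which carries
    // the configuration used to build the OTel span data.
    let provider = TracerProvider::builder()
        .with_simple_exporter(opentelemetry_stdout::SpanExporter::default())
        .build();
    let tracer = provider.tracer("my-service"); // placeholder name

    // Bridge tokio-tracing spans into OTel.
    let otel_layer = tracing_opentelemetry::layer().with_tracer(tracer);
    let subscriber = tracing_subscriber::Registry::default().with(otel_layer);
    tracing::subscriber::set_global_default(subscriber).expect("set subscriber");
}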
I'm exploring using Github Apps for w2z instead of fine-grained personal access tokens (PATs). Replacing PATs every 90 days is a bit tedious. Eventually the app flow should give a better experience.
I guess I'll look forward to PostgreSQL 17, between better upserts and some label improvements. MERGE seems to take more code than I'd like, and I wish ON CONFLICT didn't bloat, or could have an option like ALWAYS RETURNING that would return the row even if it wasn't modified. I'd deal with the bloat if the code were simple and always returned the row.
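To illustrate what I mean (the table is made up, and ALWAYS RETURNING is my wished-for syntax, not real PostgreSQL):

-- Today: RETURNING only fires when a row is inserted or actually updated.
INSERT INTO users (id, email)
VALUES (1, 'a@example.com')
ON CONFLICT (id) DO NOTHING
RETURNING *; -- returns no row when the conflict fires

-- The workaround: force an update so RETURNING always fires, at the cost
-- of a dead tuple (bloat) on every conflicting insert.
INSERT INTO users (id, email)
VALUES (1, 'a@example.com')
ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email
RETURNING *;

-- What I wish existed (hypothetical syntax):
-- ON CONFLICT (id) DO NOTHING ALWAYS RETURNING *;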
Interesting to see the Rust OTel folks discussing the friction around (tokio-)tracing. Tracing with DataDog was really convenient. I always wanted to get our shop moved to OTel to eventually be able to move away from DataDog though: partially due to pricing, partially due to more adoption of Google Cloud and not wanting to ship data out just for tracing.
I lean towards wanting tokio-tracing to be a, or the, supported case. I haven't found a solution with OTel that I really like. The initial setup has always seemed tricky, and for a cross-cutting concern like tracing, adopting OTel feels heavyweight. A single ecosystem (OTel) across language stacks is super appealing, but at this point I'm mostly writing Rust, so that isn't a huge selling point for me.
I do finally have a tracing setup in a lib that is working well, but I haven't gotten distributed tracing going yet. I'm fine adopting OTel for wire formats, but I'd like to keep tracing in my app. I've yet to find a good way to thread W3C tracecontext into OTel output.
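The closest I've gotten is extracting the traceparent header with the SDK's propagator and attaching it as the parent of a tracing span. A rough sketch, assuming the opentelemetry-http HeaderExtractor and a made-up handler shape:

use opentelemetry::propagation::TextMapPropagator;
use opentelemetry_http::HeaderExtractor;
use opentelemetry_sdk::propagation::TraceContextPropagator;
use tracing_opentelemetry::OpenTelemetrySpanExt;

// Hypothetical handler: pull the W3C traceparent header off an incoming
// request and make it the parent of the local tokio-tracing span.
fn handle(req: &http::Request<Vec<u8>>) {
    let parent_cx = TraceContextPropagator::new().extract(&HeaderExtractor(req.headers()));
    let span = tracing::info_span!("handle_request");
    span.set_parent(parent_cx); // from tracing-opentelemetry's OpenTelemetrySpanExt
    let _guard = span.enter();
    // ... handler body runs inside the remote parent's trace ...
}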
Definitely caught off guard by readTimeout in Traefik. Upping it fixed my frequently-dying Postgres connections though!
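For anyone hitting the same thing, the knob lives under the entrypoint's transport settings in the static configuration. Something like this (the entrypoint name and value are placeholders for my setup; 0s disables the timeout entirely):

entryPoints:
  postgres:
    address: ":5432"
    transport:
      respondingTimeouts:
        readTimeout: 0s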
you should know that if the algorithm chooses you it has nothing to do with the quality or value of your work. And I mean literally nothing. The algorithm is nothing more than a capitalist predator, seeking to consume what it can, monetize it quickly, then toss aside. If you make the algorithm your audience, you get very good at creating for an audience of machines rather than humans. Creating for humans is harder, it may get you ignored by the algorithm, but your work will be better for it, and it will find an audience in time.
It looks like Github branch rulesets allow setting a bypass for specific app integrations! This should allow my Github app to avoid making a branch, PR, and auto-merging... which would be nice eventually!
First time giving rulesets a try.
There is an open PR to improve Docker registry garbage collection. It helps clean up multi-arch images, which I tend to pull a bunch of. Giving it a try with a personal build, it wound up reducing my registry size by about 50%!
Saw Daybreak via Simon Clark and would love to give it a go at some point! The collaborative nature of the game and its grounding in the reality of climate action seem really appealing.
I was confused when I didn't see a Vegan label on a Silk carton. I'm not really less confused after reading the FAQ.
A works-for-me tool for cleaning out a Docker registry: Docker Registry Cleaner. It keeps the last N images of each repo, as determined by a label on the image. On each deploy to my homelab I set a label on the image being used, so this cleans up images I'm no longer using.
Super alpha-y state... will likely result in data loss!
Last spring we found it slightly too cold to go camping when we otherwise had the chance. Maybe time to pick up a warmer sleeping bag.
DNSimple sent me an email with the subject:
asdf
and the body:
asdf
I assume someone there clicked something too soon.
Cool-looking Rust crate registry: Cratery. OAuth login, S3 storage. I guess I could set up Litestream for the DB storage as well.
Anyone using DecapCMS? I gave it a try a while ago, and it looks like there are now Nested Collections, which should match how I have my site set up.
The Miniflux per-feed option ‘Disable HTTP/2 to avoid fingerprinting’ seems to fix any feeds that were being blocked (maybe by Cloudflare?).
Seeing more and more Github repos with renovate.json files as folks switch from Dependabot to Renovate. The app makes setup super easy. I'm pleased with it so far for the ~10 repos I've moved over.
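The setup really is minimal; a renovate.json like this is enough to start (the recommended preset name may differ across Renovate versions, so treat this as a sketch):

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}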
I never really followed many folks on Nostr, but I did at least update my Nostr -> RSS bridge to linkify URLs. A step towards embedding appropriate image/video tags in the RSS feed when things look like images/videos.
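The linkifying itself is the easy part; a sketch of the idea with the regex crate (not the bridge's actual code, and the URL pattern is deliberately naive):

use regex::Regex;

// Wrap bare URLs in anchor tags before templating content into the feed.
fn linkify(text: &str) -> String {
    let url_re = Regex::new(r"https?://[^\s<]+").unwrap();
    url_re
        .replace_all(text, |caps: &regex::Captures| {
            let url = &caps[0];
            format!(r#"<a href="{url}">{url}</a>"#)
        })
        .into_owned()
}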
Based on a Github issue, I was able to get a collection with Zola taxonomies writing the proper TOML frontmatter format.
collections:
  - ...
    fields:
      - label: "Taxonomies"
        name: "taxonomies"
        widget: "object"
        fields:
          - { label: "Tags", name: "tags", widget: "list", allow_add: true }
This is written out into the frontmatter as:
+++
[taxonomies]
tags = ["tag"]
+++
Worked on a PR for sqlparser to struct-ify some things. It's part of building a declarative way to generate SQL migrations at runtime from SQL CREATE TABLE definitions. I had a version of this in Erlang before and enjoyed it enough that I want it in Rust.
sqlmo does part of the migration generation piece and I'm working to add some pieces there as well.
I missed a release of sqlparser though, so the embeddable version of my lib will need to wait for a merge/release before I can push my version to crates.io. The bin version of it will be something like the atlas OSS tool, but without any cloud component.
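As a tiny illustration of the parsing half (this is just sqlparser's public API; the schema string is made up, and the diffing against a live database is the part sqlmo and my lib handle):

use sqlparser::dialect::PostgreSqlDialect;
use sqlparser::parser::Parser;

fn main() {
    let declared = "CREATE TABLE users (id BIGINT PRIMARY KEY, email TEXT NOT NULL)";
    // Parse the declared schema into an AST; a migration generator can then
    // diff it against the database's current schema and emit ALTERs.
    let statements = Parser::parse_sql(&PostgreSqlDialect {}, declared).expect("valid SQL");
    println!("{statements:#?}");
}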
A new version of Reeder is now out. I've been using Reeder (now Reeder Classic) since ~2012 (v2.5.4, from what I can tell from old email receipts). The new version is a redesign that doesn't seem aimed at me, at least. The single timeline view isn't how I want to read feeds, but I can see how it's what many people coming from social media are used to.
Charles Schwab seems to fall into a frustrating trap: an ACH transfer can be "complete" while there is still more to do (clearing).
You Keep Using That Word, I Do Not Think It Means What You Think It Means
I recently tried running OpenLLM in my homelab. It was super nice to set up and run, but performance running models on CPU was pretty poor. A GGUF model on HF was running pretty quickly, but OpenLLM doesn't seem to support GGUF.
Running text-generation-webui with a Mistral GGUF model has been really nice. The performance on an AMD Ryzen 3 5300U is super usable!
So I don't forget...
As seen on Reddit ...:
Don't forget to set the viewport when starting a new site
<meta name="viewport" content="width=device-width, initial-scale=1.0">
I think I've always had a pleasant time upgrading my NixOS boxes. For my homelab servers,
sudo nix-channel --add https://channels.nixos.org/nixos-24.05 nixos
sudo nixos-rebuild switch --upgrade
sudo reboot
Has "just worked". Keeping most of my lab stuff in Nomad helps with this as well as there isn't much that will change at the OS level, mostly Nomad, Consul, and Vault.
Having an EV charger at a vacation house is such a quality-of-life improvement. We didn't seek out a house with a charger, but now I think we always will.
A new sqlparser release pushed out my changes for supporting declare-schema! I released 0.0.1 but haven't gone through and updated my services yet.
In a month or so of using the dev version of declare-schema in my services, I've had a good time! Since I wrote it with my mental model for services in mind, it works just how I want it to. The library itself would probably wind up causing significant data loss for others, though.
At some point I want to try Wired Elements for styling personal projects, to match the rough state in which I usually leave them...
In today's "Solving Problems I Create For Myself" news:
I updated my docker-prefetch-image daemon to attempt pulling from alternative Docker repositories in the event an image pull fails.
This fixes the case where updating my Docker Repository infrastructure prevents pulling/running new Docker images. Now the prefetcher will attempt to pull from my local repository, fall back to Docker Hub, and then tag the image as if it came from my repository.
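The fallback amounts to something like this (the image and registry names are placeholders for my setup):

#!/bin/sh
# Hypothetical sketch: try the local registry first, fall back to Docker Hub,
# then retag so everything else keeps referencing the local-registry name.
IMAGE="myapp:latest"            # placeholder
LOCAL="registry.homelab.local"  # placeholder

if ! docker pull "$LOCAL/$IMAGE"; then
  docker pull "docker.io/library/$IMAGE"
  docker tag "docker.io/library/$IMAGE" "$LOCAL/$IMAGE"
fi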