I ran into a problem with my homelab MinIO cluster where a single node was able to corrupt the `ls` of a bucket. With that node turned off, `ls` showed the latest files; with the node running, `ls` wouldn't list any files added in the last month. Writes were fine in either case, other than the new objects not being `ls`-able. A manual heal on the cluster seemed to fix it well enough.
At the end of 2024 my family started a project to remove natural gas / methane service from our house. As part of that we began to replace our gas furnace with a heat pump.
The contractor mounted the outdoor unit on the wall of our house in a side alley that kept it out of the way.
This saves space on the ground, and mounting a couple feet above ground gives space for the condensate to drain when the unit is heating our house. It does not help when the unit is a source of vibration, though: with that mounting it caused structural vibrations in my house.
Mitsubishi has a helpful application note regarding the frequencies of the compressor at various times. During the startup of a cycle, in the 20-50 Hz range, my house would vibrate, and it could be felt in various rooms throughout the house.
Pads (the blue/black pieces) were placed in the mounting to help address the issue.
They... might have helped. It's hard to say, I don't really have the tools to measure structural vibration. They did not help enough to avoid driving me crazy every time the compressor started.
What did solve the problem was moving the mounting to the ground.
This addressed the structural vibration issues to the point that I could no longer feel anything in my house.
Trying out the `deepseek-r1:7b` model has been interesting so far. I've liked using it to explore how to approach a problem before making a final decision myself.
Thankful for a Reddit post pointing out that SmartHQ appliances need a 2.4 GHz network. The SmartHQ app isn't all that helpful with its error messages.
I've had trouble with connections from my phone over WireGuard for a while. For maybe two years it wasn't a big enough issue to worry about. Finally looking into it, it seems to be an MTU issue that winds up causing problems for some sites, like Duck Duck Go. When I had that set as my default search engine, it felt like the Internet was broken.
Setting the MTU to 1420 in my router's WireGuard config seems to have fixed it! It was previously at 1200... for a reason I cannot recall.
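For reference, the relevant part of a WireGuard interface config looks something like this (a sketch; the address and key are placeholders):

```ini
[Interface]
PrivateKey = <redacted>
Address = 10.0.0.2/32
# 1420 leaves room for WireGuard's encapsulation overhead on a 1500-byte link
MTU = 1420
```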
The video is worth watching as well!
I've added some sanity checks to my common GitHub Actions so that when I build Docker containers, I run the `-h` of a tool after building the image. I've been bitten a couple of times by shared library version mismatches between the build and runtime base images. This at least verifies that the binary is in place and works!
```yaml
- name: Build
  uses: docker/build-push-action@v6
  with:
    platforms: ${{ inputs.docker_platforms }}
    context: ${{ inputs.context }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
    load: true
    tags: local-build:${{ github.sha }}
    push: false
- name: Check Container
  if: inputs.check_command != ''
  run: |
    docker run local-build:${{ github.sha }} ${{ inputs.check_command }}
```
I'm running into this issue with my local Docker Registry where things seem corrupted after a garbage collect. I thought it was how I was deleting multi-arch images, but maybe not! I'm just disabling garbage collection in my system for now.
I'd love to try out OrioleDB for decoupled storage when running Postgres. Keeping all data on S3/MinIO would ease my DB disk management and remove any reliance on remote filesystems for Postgres.
The current limitation prevents me from running it fully though:

> While OrioleDB tables and materialized views are stored incrementally in the S3 bucket, the history is kept forever. There is currently no mechanism to safely remove the old data.

I'll have to keep an eye on the dev / next release.
Haven't heard of sigstore before. I need to keep this list around for the next time I see someone recommending PGP.
With 0.0.9 of declare_schema I'm starting to fail if migrations cannot be run. At the moment these cases tend to be room for improvement in the implementation rather than limitations in Postgres.
Current limitations:
I'm starting to get fairly confident in my usage of it. Going forward I hope to work more on docs and examples.
I've been meaning to try out this `IdType` trait pattern. My SQLx usage so far somewhat benefits from having different structs for to-write data and read data, so I haven't quite gotten around to testing it out.
via
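The general idea, as I understand it, is typed IDs: a newtype wrapper so the compiler keeps IDs of different tables from being mixed up. A minimal sketch (the names here are my own, not necessarily the linked `IdType` trait, and a real SQLx setup would likely wrap a `Uuid` rather than a `u64`):

```rust
use std::marker::PhantomData;

// Both IDs wrap the same raw value, but the phantom marker
// keeps them distinct at compile time.
struct Id<T> {
    raw: u64,
    _marker: PhantomData<T>,
}

impl<T> Id<T> {
    fn new(raw: u64) -> Self {
        Id { raw, _marker: PhantomData }
    }
    fn raw(&self) -> u64 {
        self.raw
    }
}

// Zero-sized marker types naming each table's ID.
struct User;
struct Item;

fn lookup_user(id: &Id<User>) -> u64 {
    id.raw()
}

fn main() {
    let user_id = Id::<User>::new(7);
    let item_id = Id::<Item>::new(7);
    // lookup_user(&item_id); // compile error: expected Id<User>, found Id<Item>
    let _ = item_id.raw();
    println!("{}", lookup_user(&user_id));
}
```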
Some of my projects recently failed during lockfile updates with:

`error "OPENSSL_API_COMPAT expresses an impossible API compatibility level"`

The lockfile updates included `openssl-sys-0.9.104`. I remember not really needing OpenSSL at all, leaning on `rustls` for the most part. This was the push I needed to figure out why OpenSSL was still being included.
`reqwest` includes `default-tls` as a default feature, which seems to be `native-tls`, a.k.a. OpenSSL. Removing default features worked for my projects:

```toml
reqwest = { version = "0.12.4", features = ["rustls-tls", "json"], default-features = false }
```
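To find what is still pulling in OpenSSL, inverting the dependency tree is handy (a sketch; run inside the project):

```shell
# list the dependency chains that lead to openssl-sys
cargo tree --invert openssl-sys
```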
While upgrading to Postgres 17 I ran into a few problems in my setup:

- The upgrade broke `pg_dump` as well, so backups stopped for a few days
- `pg_dump` for Postgres 17 (in some conditions? at least in my setup) requires ALPN with TLS.

From the release notes:

> Allow TLS connections without requiring a network round-trip negotiation (Greg Stark, Heikki Linnakangas, Peter Eisentraut, Michael Paquier, Daniel Gustafsson)
>
> This is enabled with the client-side option sslnegotiation=direct, requires ALPN, and only works on PostgreSQL 17 and later servers.
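From the client side the option looks something like this (a sketch; the hostname is a placeholder, and `sslnegotiation=direct` is the libpq connection option from the release notes):

```shell
psql "host=db.example.com sslmode=require sslnegotiation=direct"
```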
I run Traefik to proxy Postgres connections, taking advantage of TLS SNI so a single Postgres port can be opened in Traefik and it will route each connection to the appropriate Postgres instance. Traefik ... understandably ... doesn't default to advertising that it supports the `postgresql` service over TLS. This must be done explicitly.
In Traefik I was seeing logs such as `tls: client requested unsupported application protocols ([postgresql])`. From `pg_dump` the log was `SSL error: tlsv1 alert no application protocol "postgres"`.
Fixing this required configuring Traefik to explicitly say `postgresql` was supported.
```toml
# Dynamic configuration
[tls.options]
  [tls.options.default]
    alpnProtocols = ["http/1.1", "h2", "postgresql"]
```
This, as documented, is dynamic configuration. It must go in a dynamic config file declaration, not the static one. In my static config I needed to add:
```toml
[providers]
  [providers.file]
    directory = "/local/dynamic"
    watch = true
```
Where `/local/dynamic` is a directory that contains dynamic configuration. I was unable to get `alpnProtocols` set with Nomad dynamic configuration. I always ran into `invalid node options: string` when Traefik tried to load the config from Consul. Maybe from this.
Pleased again with SQLx tests while adding tests against Postgres to test migrations. Previously there weren't any automated tests for "what does this lib pull from Postgres"; I was doing that manually.
```rust
#[sqlx::test]
async fn test_drop_foreign_key_constraint(pool: PgPool) {
    crate::migrate_from_string(
        r#"
        CREATE TABLE items (id uuid NOT NULL, PRIMARY KEY(id));
        CREATE TABLE test (id uuid, CONSTRAINT fk_id FOREIGN KEY(id) REFERENCES items(id))"#,
        &pool,
    )
    .await
    .expect("Setup");

    let m = crate::generate_migrations_from_string(
        r#"
        CREATE TABLE items (id uuid NOT NULL, PRIMARY KEY(id));
        CREATE TABLE test (id uuid)"#,
        &pool,
    )
    .await
    .expect("Migrate");

    let alter = vec![r#"ALTER TABLE test DROP CONSTRAINT fk_id CASCADE"#];
    assert_eq!(m, alter);
}
```
Really impressed with SQLx testing. Super simple to create tests that use the DB in a way that works well in dev and CI environments. I'm not using the migration feature but instead have my own setup to get the DB into the right state at the beginning of tests.
Half of how I use the web is with RSS. I can't really imagine having to go back to finding new things on various pages to see what's new.
Making a PR to SQLx to add Postgres `lquery` arrays. This took less time than I expected to try and fix. More time was spent wrangling my various projects to use a local `sqlx` dependency.
Postgres `ltree` has a wonderful `?` operator that will check an array of `lquery`s. I plan to use this to allow filtering multiple labels in my expense tracker.

From the Postgres docs:

> `ltree ? lquery[] → boolean`
> `lquery[] ? ltree → boolean`
>
> Does ltree match any lquery in array?
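A sketch of how I'd expect to use it for label filtering (assumes the `ltree` extension is installed; the label paths are made up):

```sql
-- does the expense's label path match any of the given patterns?
SELECT 'Expenses.Food.Groceries'::ltree ? ARRAY['*.Food.*', '*.Travel.*']::lquery[];
```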
I'm trying to figure out if I can create service accounts in Kanidm and get a JWT that will work with pREST. pREST can be configured to use a `.well-known` URL to pull a JWK. This would allow me to give a long-lived service account API key to each service and keep token generation out of my services.
It looks like not yet! But they seem to be aware of this use case.