@fractaledmind you're everywhere now if #SQLite is in the game, aren't you?
Thank you!
In #SQLite: LIKE is case-insensitive by default for ASCII characters (like normal English letters A–Z).
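A quick illustration (assuming a stock build with no ICU extension or custom collation loaded):
SELECT 'ABC' LIKE 'abc';   -- 1: ASCII letters compare case-insensitively
SELECT 'ÄBC' LIKE 'äbc';   -- 0: the default case folding only covers A-Z
PRAGMA case_sensitive_like = ON;   -- flips the default if you want exact-case matching for ASCII too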
tinysparql (https://tracker.gnome.org/) as an rdf foundation to #localfirst ... i'd watch a whole talk on that. this feels like a fun way to get away from the pain of shoehorning sync engines into sql dbs.
one thing i certainly love about side projects is that they get us out of the daily perpetual work rabbit hole with endless pre-defined constraints!
for this homelab/ #iot sensor stuff i finally got to play with #sqlite and i’m really loving it. since everything is also running locally (and i finally made sqlite indexes work) this home sensor page for my e-ink display loads *SO FAST* with just a couple of ms
If this message made it to you, it means the migration from #GoToSocial 0.18.3 to 0.19.0 went ok. The whole upgrade process, database migration stuff included, took only a couple of minutes; the #SQLite database is 1 GB in size. Of course, it runs on #NetBSD, using the official build.
#sqlite3 #database #sqlite I have a strong preference for solutions I can make work with pypika, since there are parts of the codebase that are not conducive to prepared statements and that's what I use for query building. I haven't looked into how to make the `indexed by` and `likely`/`unlikely` work with pypika yet.
Side note: I swear I did something earlier today that made the query complete in 40ms (still with the `indexed by`) and I'm losing my marbles trying to recreate that.
The #sqlite3 docs say that `indexed by` should not be used for query design -- only to prevent regressions. But what is the alternative? I have a #database where most queries are similar to the following:
select count(distinct tags.file) from tags indexed by tags_tag_nocase join meta on tags.file = meta.file where meta.trash = 0 and (tags.tag = 'foo' or tags.tag like 'foo.%')
Without any `indexed by` clause, the best I can get with this query is about 170ms. With the clause shown, it takes about 80ms. I've tried peppering in "likely" and "unlikely", but this is not an improvement.
Some key facts about the data:
- … in `tags`, and 10K in `meta`
- … `tags` can be expected to fulfill either of the `tags.tag` clauses
- … `meta` have `meta.trash=0`
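One thing often suggested instead of `indexed by` (just a sketch, and the index definition below is my guess at what tags_tag_nocase looks like) is to give the planner statistics with ANALYZE and then check what it picks on its own:
-- assumed shape of the existing index; the real tags_tag_nocase may differ
CREATE INDEX IF NOT EXISTS tags_tag_nocase ON tags (tag COLLATE NOCASE, file);
-- gather statistics so the planner can prefer this index without INDEXED BY
ANALYZE;
-- verify the chosen plan
EXPLAIN QUERY PLAN
select count(distinct tags.file) from tags join meta on tags.file = meta.file
where meta.trash = 0 and (tags.tag = 'foo' or tags.tag like 'foo.%');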
@leogdion For #SQLite I also have Lighter, which I think is actually really good: https://github.com/Lighter-swift
And for Core Data I have replicated the #SwiftData macros: https://github.com/Data-swift/ManagedModels/
This by @mattiem might also be relevant to your article: https://mastodon.social/@mattiem/114359442297769327
I suspected that writing to the SD card for 250ms every few seconds was a bad idea
That is how long it took for SQLite to insert 1 row into each of 4 databases, roughly 80ms times 4
(I have four databases: the daily DB, one for the day before, one for the day after, and the huge all-time database)
So instead, I added a CSV buffer: incoming readings are appended to a flat file instead of going straight into SQLite, and every 5 minutes, one of the nodes takes a hit to commit the CSV into the databases
The CSV append operation takes a few ms, usually 2ms but sometimes 5ms, so I'm happy about this part
The CSV to SQL operation takes about 8s when all 10 nodes are up (50 measurements to 4 databases), so that's a tad slower than I'd want but it happens once every 5 minutes so I guess that's ok?
(every SQL insert now takes about 40ms and there is probably still room for improvement)
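One more possible squeeze (a sketch only, with a made-up table and values, not the actual schema): wrapping each 5-minute batch in a single transaction per database, so there is one commit per batch instead of one per inserted row:
-- hypothetical readings table
CREATE TABLE IF NOT EXISTS readings (ts TEXT, node_id TEXT, gas REAL, temp REAL, hum REAL, pressure REAL);
BEGIN;
INSERT INTO readings VALUES ('2025-04-18T12:00:00Z', 'node01', 120000, 21.5, 40.2, 1013.2);
INSERT INTO readings VALUES ('2025-04-18T12:00:00Z', 'node02', 98000, 20.9, 41.0, 1013.1);
-- ...one INSERT per buffered CSV line...
COMMIT;   -- a single sync to the SD card for the whole batch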
And now I can keep a console with `watch cat buffer.csv`, which is great, and more readable than my server logs
Here is a sequence of four batches as a demo (first CSV field is a timestamp, second is the node ID, and the rest are air quality readings from the BME680)
@argv_minus_one
If it is actually written, yes. But #SQLite might detect there is no actual change and not declare the page dirty. And even if it is declared dirty, the row size will not change, so the B-Tree structure will be stable.
The page has to be read anyway, so both the disk cache and the SQLite page cache are primed. Except for hi-speed, hi-volume applications, I would not worry in advance. Premature optimization is the root of all evil.
The daily database files jumped to 2.6 MiB each since I started triple-storing, as a lazy fix for my UTC issue
This is still much bigger than I’d like, but at least the file size has stayed steady from April 7 to April 18 inclusive, so that’s good
And I can always reduce to every 5 minutes instead of every minute, or something
@argv_minus_one
CREATE TABLE T1 (
id INTEGER PRIMARY KEY NOT NULL
, tx TEXT UNIQUE
);
INSERT INTO T1 (tx)
VALUES ('abc'),('def'),('ghi');
SELECT * FROM T1;
id tx
-- ---
1 abc
2 def
3 ghi
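-- Upsert trick: the no-op UPDATE (id = id + 0) lets RETURNING report the existing row's id even on conflict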
INSERT INTO T1 (tx) VALUES ('def') ON CONFLICT (tx) DO
UPDATE SET id = id + 0
RETURNING id;
id
--
2
I am disappointed to find that, with #SQLite, an INSERT statement with both ON CONFLICT DO NOTHING and RETURNING clauses won't return anything if there is a conflict.
I have a table that's basically just for interning frequently-used strings. It has a numeric ID column and a unique text column, and I want to get the ID for a given string, inserting if necessary.
With this RETURNING behavior, that has to be done in two steps, not one.
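For reference, here is what the two-step version looks like (a sketch with a hypothetical table name and columns, mirroring the interning pattern described above):
CREATE TABLE IF NOT EXISTS strings (id INTEGER PRIMARY KEY NOT NULL, value TEXT UNIQUE);
-- step 1: insert if the string is new; on conflict nothing is inserted and RETURNING would yield no row
INSERT INTO strings (value) VALUES ('example') ON CONFLICT (value) DO NOTHING;
-- step 2: fetch the id regardless of whether step 1 inserted anything
SELECT id FROM strings WHERE value = 'example';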
SQLite Transactions and Virtual Tables
https://misfra.me/2025/sqlite-transactions-and-virtual-tables/
DIY AI Butler Is Simpler and More Useful Than Siri - [Geoffrey Litt] shows that getting an effective digital assistant that’s tailored ... - https://hackaday.com/2025/04/15/diy-ai-butler-is-simpler-and-more-useful-than-siri/ #artificialintelligence #digitalassistant #aiassistant #sqlite #diy #llm
A hackable AI assistant using a single SQLite table and a handful of cron jobs
#syncthing is working on a v2 release that will switch the internal database to #sqlite. Testers welcome!
https://forum.syncthing.net/t/syncthing-on-sqlite-help-test/23981
Running Matrix Synapse on SQLite and ran into a heavy performance problem on 1.128.0?
Well I did too and worked around it: https://slatecave.net/blog/synapse-sqlite-fix/
That’s what synapse gets for getting in my way while I’m trying to debug some other issue at 1:00 in the morning.
As a stress test I loaded 8M bookmarks into a little SQLite-backed bookmark saving/sharing web app I'm building. The slowest part turned out to be the `SELECT count(1) FROM bookmarks` query that's used as part of the pagination UI. Not what I was expecting, but known/expected behaviour from what I've read. It takes 400ms on a Snapdragon X Elite CPU, which is fine in practice; I'd be surprised if anyone actually stored more than 100k items in it.
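For what it's worth, the query plan makes the cost visible (the exact output depends on the schema, so this is approximate):
EXPLAIN QUERY PLAN SELECT count(1) FROM bookmarks;
-- typically reports a full pass, e.g. `SCAN bookmarks` or `SCAN bookmarks USING COVERING INDEX ...`,
-- which is why the time grows with the 8M rows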