sueden.social is one of many independent Mastodon servers you can use to participate in the fediverse.
A community for everyone who feels drawn to the South. We can do anything except speak standard German.


#database

17 posts · 15 participants · 1 post today
Continued thread

The #FBI has its own “Next Generation Identification” #biometric & #criminal-history #database program; the agency also has a #FacialRecognition apparatus capable of matching people against >640M photos—a database made up of #DriversLicense & #passport photos, as well as mug shots. The #SocialSecurity Admin keeps a master #earnings file, which contains the “individual earnings histories for each of the 350+ million SSNs that have been assigned to workers.”

#Trump #law #privacy

« Staffers from Musk’s Department of Government Efficiency are building a master #database to speed up #immigration enforcement and #deportations by combining #sensitive data from across the federal government. The goal is to create a massive repository of data pulled from various agencies, according to sources familiar with the project who spoke on the condition of anonymity because they aren’t authorized to talk about it. The administration has previously sought to centralize information from a number of agencies, including the IRS, the Social Security Administration and Health and Human Services. »

edition.cnn.com/2025/04/25/pol

CNN · DOGE is building a master database for immigration enforcement, sources say · By Priscilla Alvarez

Taking a backup of your #museum #database can be crucial in the current political climate. You don't know when you will be forced to delete what you once knew about the objects in your museum and the people who made them. They might be too queer or not have the right skin color. Better rescue what you have NOW and store it in a secure place.
Today's blog post is about how to do that if you are using The Museum System (TMS).

world.museumsprojekte.de/lets-

world.museumsprojekte.de · Let’s talk about data security: How to back-up your TMS database | Registrar Trek: The Next Generation
More from RegistrarTrek

Bring your ggplot2 visualizations into 3D with rayshader! This extension adds powerful 3D plotting capabilities to R, making it easy to transform standard visuals into interactive and visually engaging data representations.

The visualizations shown here are taken from the rayshader package website: rayshader.com/

Click this link for detailed information: statisticsglobe.com/online-cou
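rayshader itself is an R package, so purely as an illustrative analogue, here is a minimal Python/matplotlib sketch of the same idea: the same made-up raster rendered once as a flat 2D image and once as a 3D surface. None of this uses rayshader's actual API; the data and styling are invented for the example.

```python
# Analogue only: rayshader is R, this is a matplotlib sketch of the
# "take a 2D raster and lift it into 3D" idea with fake elevation data.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)
Z = np.exp(-(X**2 + Y**2)) + 0.3 * np.exp(-((X - 1.5)**2 + (Y - 1.0)**2))

fig = plt.figure(figsize=(8, 4))

# Left: the ordinary 2D view
ax2d = fig.add_subplot(1, 2, 1)
ax2d.imshow(Z, extent=(-3, 3, -3, 3), origin="lower", cmap="viridis")
ax2d.set_title("2D raster")

# Right: the same data rendered as a 3D surface
ax3d = fig.add_subplot(1, 2, 2, projection="3d")
ax3d.plot_surface(X, Y, Z, cmap="viridis", linewidth=0)
ax3d.set_title("3D surface")

plt.tight_layout()
plt.show()
```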

Let’s talk about data security: How to back-up your TMS database

This is a step-by-step guide on how to back up your database if you are using The Museum System (TMS) by Gallery Systems. If you use a different system, the process will differ; ask your vendor about it.

Step 1: Log into your database server and open Microsoft SQL Server Management Studio

The fastest way to find it is usually to start typing “SQL Server Management…” into the Windows search box.

Step 2: Enter your credentials

You will be prompted to enter your login credentials. If you use Windows (NT) Authentication, usually all you have to do is click on “Connect”. If you use another form of authentication, you will have to enter those login details; your IT department can tell you what to enter in that case.
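If you ever want to script these first two steps instead of clicking through SSMS, the same two authentication choices look roughly like the Python/pyodbc sketch below; the server name and driver version are assumptions, not values from this guide.

```python
# Rough sketch only -- ask your IT department for the real server name,
# driver version and (if needed) SQL login.
import pyodbc

# Windows ("NT") Authentication -- the "just click Connect" case:
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;"          # assumption: you are logged in on the database server itself
    "DATABASE=master;"
    "Trusted_Connection=yes;"
)

# SQL Server Authentication -- the "enter your login details" case:
# conn = pyodbc.connect(
#     "DRIVER={ODBC Driver 17 for SQL Server};"
#     "SERVER=localhost;DATABASE=master;"
#     "UID=backup_user;PWD=<password>;"   # placeholders only
# )
```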

Step 3: Find your database

In the tree hierarchy, open the folder “Databases” and find your database. It is usually called something like “TMS”. In my case it is called “Leer”.
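The scripted counterpart of browsing the “Databases” folder is a query against sys.databases. A small sketch, reusing the pyodbc connection from the sketch in step 2:

```python
# List every database visible to this login and look for yours
# ("TMS", "Leer", or whatever it is called at your institution).
cursor = conn.cursor()
for (name,) in cursor.execute("SELECT name FROM sys.databases ORDER BY name;"):
    print(name)
```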

Step 4: Navigate to the backup menu

Right-click on your database, choose “Tasks” and then “Back Up…”

If that option is greyed out, you might not have the rights to do this, in which case you should talk to your IT department to get those rights.
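If you want to check for yourself whether a greyed-out menu really is a permissions problem, SQL Server can answer that directly. A sketch using the database name “Leer” from this guide and the connection from step 2; HAS_PERMS_BY_NAME returns 1 if your login may take backups and 0 if not:

```python
cursor = conn.cursor()
row = cursor.execute(
    "SELECT HAS_PERMS_BY_NAME(?, 'DATABASE', 'BACKUP DATABASE');",
    "Leer",
).fetchone()
print("Allowed to back up:", row[0])   # 1 = yes, 0 = no, None = database name not found
```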

Step 5: Choose your backup method

This opens the backup dialog.

Here you can choose whether you want to do a full or a differential backup (we talked about that here). You select this in the “Backup type” drop-down. We chose “Full” for this backup.

As a destination, “Disk” is usually fine, since you probably want to have the backup on your computer first and transfer it to cloud storage later.

Sometimes you will see a backup file already listed in the pane below. If that is the case, remove it first.

Then click on “Add…”
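For reference, the two entries in the “Backup type” drop-down correspond to T-SQL statements along these lines, written as Python strings here so they can be run later. The paths and names are only examples; the differential file name is my own invention, not something from this guide.

```python
# "Full" -- a complete, self-contained copy of the database:
full_backup_sql = """
BACKUP DATABASE [Leer]
TO DISK = N'M:\\Backups\\AfterCI2025.bak'
WITH NAME = N'Leer full backup';
"""

# "Differential" -- only the changes since the last full backup:
diff_backup_sql = """
BACKUP DATABASE [Leer]
TO DISK = N'M:\\Backups\\AfterCI2025-diff.bak'
WITH DIFFERENTIAL, NAME = N'Leer differential backup';
"""
```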

Step 6: Add the file you want to back up to

By default, Microsoft suggests a rather cryptic sub-folder for your backups. I’d recommend creating a folder in a more prominent place that you can easily find, and backing up to there; mine is “M:\Backups”.

Enter a file name for your backup. This can be the date you took it (best practice is to write the date in year-month-day format so you can easily sort by date if you have multiple backup files) or a meaningful pointer to when you took it, for example “BeforeUpgradeTo995” if this is your backup before upgrading to a new version. Don’t forget to add “.bak” as the file extension, otherwise you might run into difficulties restoring it later.
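A tiny sketch of that year-month-day naming convention, in case you ever generate the file name in a script; the folder and extension are the ones described above.

```python
from datetime import date

backup_dir = "M:\\Backups"                   # the folder added in SSMS above
file_name = f"{date.today():%Y-%m-%d}.bak"   # e.g. "2025-04-28.bak" -- sorts by date
print(backup_dir + "\\" + file_name)
```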

You can see that I called mine “AfterCI2025.bak” because it is the backup I took after adding a significant amount of information from our user conference.

Once you have entered the name, hit “OK”.

Step 7: Take your backup

After that, you just need to click “OK” and your backup will be taken. If you have enough disk space in the chosen location, all is fine; otherwise it will throw an error message.
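And if you ever want that final “OK” as a script rather than a click, it boils down to running the full-backup statement from step 5 over the pyodbc connection from step 2. A hedged sketch: SQL Server refuses to run BACKUP inside a user transaction, and pyodbc opens one implicitly, hence the autocommit line.

```python
# Sketch only -- runs the full backup defined in step 5.
conn.autocommit = True            # BACKUP DATABASE cannot run inside a transaction

cursor = conn.cursor()
cursor.execute(
    "BACKUP DATABASE [Leer] "
    "TO DISK = N'M:\\Backups\\AfterCI2025.bak' "
    "WITH NAME = N'Leer full backup';"
)
# The backup produces informational messages instead of a result set;
# drain them so the statement has definitely finished before closing.
while cursor.nextset():
    pass
print("Backup written to M:\\Backups\\AfterCI2025.bak")
```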

That was it. Wasn’t too hard, was it? And now you are good to go and bring your backup to a safe location. I will do another post on how to compress the backup and restore it on another server.

Take your backup and take care!

Angela

After #TaylorOtwell suggested it, I'm releasing my super-charged #Laravel Seeder.

✅ Seed steps wrapped into transactions
✅ Continue incomplete seeding
✅ Push arguments to each seed step
✅ Backward compatible with the old `run()`
... and a lot more features.
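Not the Laragear\Populate API (that is PHP); just a minimal, language-agnostic sketch of the first two ideas, one transaction per seed step plus a log so an interrupted run can continue, written in Python with sqlite3 so it stays self-contained.

```python
# Concept sketch only -- this is not Laragear\Populate's API.
# Each seed step runs inside its own transaction, and finished steps are
# recorded in a log table so an interrupted run can pick up where it left off.
import sqlite3

def seed_users(conn):
    conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

def seed_posts(conn):
    conn.execute("INSERT INTO posts (title) VALUES ('hello world')")

STEPS = [("users", seed_users), ("posts", seed_posts)]

def run_seeder(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS posts (title TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS seed_log (step TEXT PRIMARY KEY)")
    done = {row[0] for row in conn.execute("SELECT step FROM seed_log")}
    for name, step in STEPS:
        if name in done:
            continue                            # resume: skip completed steps
        with conn:                              # one transaction per seed step
            step(conn)
            conn.execute("INSERT INTO seed_log (step) VALUES (?)", (name,))

run_seeder(sqlite3.connect("seed_demo.db"))
```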

github.com/Laragear/Populate

GitHub · Laragear/Populate: Populate your database with a supercharged, continuable seeder
#PHP #Database #DB
Continued thread

#sqlite3 #database #sqlite I have a strong preference for solutions I can make work with pypika, since there are parts of the codebase that are not conducive to prepared statements, and pypika is what I use for query building. I haven't looked into how to make INDEXED BY and likely()/unlikely() work with pypika yet.

Side note: I swear I did something earlier today that made the query complete in 40 ms (still with the INDEXED BY) and I'm losing my marbles trying to recreate that.
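For what it's worth, a hedged sketch of how this might look with pypika, assuming a hypothetical events table and index name: likely()/unlikely() can be expressed with CustomFunction (which renders an arbitrary function call), whereas INDEXED BY has no pypika construct that I know of, so the sketch resorts to patching the generated SQL string afterwards.

```python
# Assumptions: table "events", columns "id"/"ts"/"flagged", and index
# "idx_events_ts" are made up for illustration; only the SQL string is built here.
from pypika import CustomFunction, Query, Table

events = Table("events")
unlikely = CustomFunction("unlikely", ["expr"])   # SQLite planner hint

q = (
    Query.from_(events)
    .select(events.id, events.ts)
    .where((events.ts > 0) & unlikely(events.flagged == 1))
)

sql = q.get_sql()
# Crude workaround for INDEXED BY: rewrite the FROM clause by hand.
sql = sql.replace('FROM "events"', 'FROM "events" INDEXED BY "idx_events_ts"', 1)
print(sql)
```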