Posts claim photo shows Trump, Epstein and Bondi. They're almost right
Jul. 16th, 2025 11:04 pm
The original image, taken in 1997, shows the two men with Belgian model Ingrid Seynhaeve.
Trump didn't post 'MAGA agrees that 14 year-old girls are almost women anyway'
Jul. 16th, 2025 10:49 pm
Pressure has mounted for the release of more files related to sex offender Jeffrey Epstein, whom Trump once said was a "terrific guy."
002: Snow; P1Harmony, Enhypen; A Shared Peace (Keeho, Sunghoon)
Jul. 16th, 2025 04:51 pm
Creator: andersenmom
Title: A Shared Peace
Rating: G
Type: Fic
Size/length/word count etc.: 500
Prompt: 002: Snow
Fandom/Ship: Keeho (P1Harmony) / Sunghoon (Enhypen)
Notes/Warnings: None
Summary: Sunghoon needed peace, and he was willing to share the space with someone else looking for peace.
Find the table with the list of fics here
Fic: A Shared Peace
Jul. 16th, 2025 04:48 pm
Title: A Shared Peace
Rating: G
Type: Fic
Size/length/word count etc.: 500
Prompt: 002: Snow
Fandom/Ship: Keeho (P1Harmony) / Sunghoon (Enhypen)
Notes/Warnings: None
Summary: Sunghoon needed peace, and he was willing to share the space with someone else looking for peace.
Find the table with the list of fics here
July Theme - Hobbies and crafts
Jul. 16th, 2025 11:44 pm
Keeping up with our Hobbies and Crafts monthly theme. How are things going? Are you managing to find things to work on that in some way relate to the theme, or are you tackling your own thing this month?
What have you found yourself doing?
[ SECRET POST #6767 ]
Jul. 16th, 2025 06:36 pm
⌈ Secret Post #6767 ⌋
Warning: Some secrets are NOT worksafe and may contain SPOILERS.
01. [secret image]
( More! )
Notes:
Secrets Left to Post: 01 pages, 16 secrets from Secret Submission Post #968.
Secrets Not Posted: [ 0 - broken links ], [ 0 - not!secrets ], [ 0 - not!fandom ], [ 0 - too big ], [ 0 - repeat ].
Current Secret Submissions Post: here.
Suggestions, comments, and concerns should go here.
Kevin Spacey posted about releasing Epstein files?
Jul. 16th, 2025 10:22 pm
The disgraced actor made the post around a week after the Justice Department released a memo saying it found no evidence of an Epstein "client list."
Texas floods: Myriad of misleading claims besiege Tom Brady in aftermath
Jul. 16th, 2025 10:14 pm
In the aftermath of the deadly July 2025 floods, users shared AI-generated images to promote misinformation about the seven-time Super Bowl champion.
Disproving claim Rachel Maddow 'shattered' Stephen Miller's reputation during TV interview
Jul. 16th, 2025 09:20 pm
According to social media posts, "Washington" scrambled to do damage control over the alleged live television interview.
after a long hiatus
Jul. 16th, 2025 04:50 pm
Ninefox Gambit (comic) and Candle Arc (DIY 2D animation short in preproduction).
(eller, so sorry for the delay! I had to debug a bunch of WordPress and I've been preoccupied with work/family. Of course, now I'm preoccupied with composition/orchestration assignments.)
(Yes, this is supposed to be a public post.)
some good things
Jul. 16th, 2025 10:49 pm
- Really enjoying the redcurrant cake I finally managed to make the other evening.
- First of the clothes-for-me from the latest Oxfam order showed up and is in fact more or less Perfect, hurrah. (Cargo shorts. Two pairs of linen cargo trousers due tomorrow...)
- Mulberries! ewt informed me that they were starting to come ready, so I took a detour via the local tree and did indeed manage to munch a token handful.
- I made a batch of mostly-white-some-rye caraway-and-poppyseed bread, and it goes spectacularly well with the cherry plum and vanilla jam a friend gave me at the weekend. I have been having some Very Happy Breakfasts.
- My extremely late-into-the-ground squash are starting to produce female flowers!
- And I found some more lurking long bamboo to install for the late-sown beans to maybe make their way up.
- AND I might actually break even on peas-for-sowing-next-year if the second flush on one of the plants does what it's threatening to, which I would be extremely excited about because I had been mildly regretting eating (instead of saving for seed) the handful we did eat, when my original intention had in fact been to Just Save Seed this year... (... but they were very tasty.)
- We are reading Hyperbole and a Half (the book) together a chapter at a time! They are an excellent short Shared Activity.
- I have this evening spent a pleasant ten minutes playing around with the dragons game and enjoying getting some very pretty possible dragons out of it. Yes good.
- Read about three elephants graduating to the Reintegration Unit run by the Sheldrick Trust and cried a lot. (Also at the accompanying video.) (Good crying.)
Originality is the art of concealing your source
Jul. 16th, 2025 02:35 pm
Late last year I wrote this. Since it's on-topic, I'd like to see what everyone here thinks...
Search engines used to take in a question and then direct the user to some external data source most relevant to the answer.
Generative AI in speech, text, and images is a way of ingesting large amounts of information specific to a domain and then regurgitating synthesized answers to questions posed about that information. This is basically the next evolutionary step of a search engine. The main difference is, the answer is provided by an in-house synthesis of the external data, rather than a simple redirect to the external data.
This is being implemented right now on the Google search page, for example. Calling it a search page is now inaccurate. Google vacuums up information from millions of websites, then regurgitates an answer to your query directly. You never perform a search. You never visit any of the websites the information was derived from. You are never aware of them, except in the case where Google is paid to advertise one to you.
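To make that contrast concrete, here's a toy sketch (purely illustrative; the three pages, the word-overlap scoring, and the "synthesis" step are invented stand-ins, not how any real search engine or model works). The first function points you at the sources; the second digests the same pages and hands you an answer with the sources erased.

```python
import re

# Purely illustrative toy, not any real system: three hypothetical pages
# stand in for "millions of websites."
CORPUS = {
    "https://example.org/solar": "Solar panels convert sunlight into electricity.",
    "https://example.org/wind": "Wind turbines convert moving air into electricity.",
    "https://example.org/coal": "Coal plants burn fuel to generate electricity.",
}

def tokenize(text):
    """Lowercase word set; crude, but enough for the illustration."""
    return set(re.findall(r"[a-z]+", text.lower()))

def classic_search(query):
    """Old model: score the external pages and send the user to them."""
    q = tokenize(query)
    scored = sorted(
        ((len(q & tokenize(text)), url) for url, text in CORPUS.items()),
        reverse=True,
    )
    return [url for score, url in scored if score > 0]  # links out to the sources

def generative_answer(query):
    """New model: ingest the same pages and answer in-house, with no links."""
    q = tokenize(query)
    relevant = [text for text in CORPUS.values() if q & tokenize(text)]
    # A real model would synthesize fluent prose; this toy just fuses the
    # ingested text. Either way, the user never sees where it came from.
    return " ".join(relevant) if relevant else "No answer."

if __name__ == "__main__":
    print(classic_search("how is electricity generated"))     # URLs to visit
    print(generative_answer("how is electricity generated"))  # answer, sources gone
```

The point isn't the code; it's the flow of value. In the first function the sources are the destination. In the second they're just fuel.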
If all those other pages didn’t exist, Google's generative AI answer would be useless trash. But those pages exist, and Google has absorbed them. In return, Google gives them ... absolutely nothing, but still manages to stand between you and them, redirecting you to somewhere else, or ideally, keeping you on Google permanently. It's convenient for you, profitable for Google, and slow starvation for every provider of content or information on the internet. Since its beginning as a search engine, Google has gone from middleman, to broker, to consultant. Instead of skimming some profit in a transaction between you and someone else, Google now does the entire transaction, and pockets the whole amount.
Reproducing another's work without compensation is already illegal, and has been for a long time. The only way this new process stays legal is if the work it ingests is sufficiently large or diluted that the regurgitated output looks different enough (to a human) that it does not resemble a mere copy, but reads as an interpretation or reconstruction. There is a threshold below which any reasonable author or editor would declare plagiarism, and human editors and authors have collectively learned that threshold over centuries. Pass that threshold, and your generative output is no longer plagiarism. It's legally untouchable.
An entity could ingest every performance Mavis Staples has ever given, then churn out a thousand albums "in the style" of Mavis Staples, and would owe her nothing, while at the same time reducing the value of her discography to almost nothing. An entity could do the same for television shows, for novels - even for non-fiction books, academic papers, and scientific research - and owe the creators of these works nothing, even if it leveraged infinite regurgitated variations of the source material for its own purposes internally. Ingestion and regurgitation by generative AI is, at its core, doing for information what the mafia needs to do with money to hide it from the law: it is information laundering.
Imitation is the sincerest form of flattery, and there are often ways to leverage imitators of one's work to gain recognition or value for oneself. These all rely on the original author being able to participate in the same marketplace that the imitators are helping to grow. But what if the original author is shut out? What if the imitators have an incentive to pretend that the original author doesn't exist?
Obscuring the original source of any potential output is the essential new trait that generative AI brings to the table. Wait, that needs better emphasis: The WHOLE POINT of generative AI, as far as for-profit industry is concerned, is that it obscures original sources while still leveraging their content. It is, at long last, a legal shortcut through the ethical problems of copyright infringement, licensing, plagiarism, and piracy -- for those already powerful enough to wield it. It is the Holy Grail for media giants. Any entity that can buy enough computing power can now engage in an entirely legal version of exactly what private citizens, authors, musicians, professors, lawyers, etc. are discouraged or even prohibited from doing. ... A prohibition that all those individuals collectively rely on to make a living from their work.
The motivation to obscure is subtle, but real. Any time an entity provides a clear reference to an individual external source, it is exposing itself to the need to reach some kind of legal or commercial or at the very least ethical negotiation with that source. That's never in their financial interest. Whether it's entertainment media, engineering plans, historical records, observational data, or even just a billion chat room conversations, there are licensing and privacy strings attached. But, launder all of that through a generative training set, and suddenly it's ... "Source material? What source material? There's no source material detectable in all these numbers. We dare you to prove otherwise." Perhaps you could hire a forensic investigator and a lawyer and subpoena their access logs, if they were dumb enough to keep any.
An obvious consequence of this is, to stay powerful or become more powerful in the information space, these entities must deliberately work towards the appearance of "originality" while at the same time absorbing external data, which means increasing the obscurity of their source material. In other words, they must endorse and expand a realm of information where the provenance of any one fact, any measured number, any chain of reasoning that leads outside their doors, cannot be established. The only exceptions allowable are those that do not threaten their profit stream, e.g. references to publicly available data. For everything else, it's better if they are the authority, and if you see them as such. If you want to push beyond the veil and examine their reasoning or references, you will get lost in a generative hall of mirrors. Ask an AI to explain how it reached some conclusion, and it will construct a plausible-looking response to your request, fresh from its data stores. The result isn't what you wanted. It's more akin to asking a child to explain why she didn't do her homework, and getting back an outrageous story constructed in the moment. That may seem unfair since generative AI does not actually try to deceive unless it's been trained to. But the point is, ... if it doesn't know, how could you?
This economic model has already proven to be ridiculously profitable for companies like OpenAI, Google, Adobe, et cetera. They devour information at near zero cost, create a massive bowl of generative AI stew, and rent you a spoon. Where would your search for knowledge have taken you, if not to them? Where would that money in your subscription fee have gone, if not to them? It's in the interest of those companies that you be prevented from knowing. Your dependency on them grows. The health of the information marketplace and the cultural landscape declines. Welcome to the information mafia.
Postscript:
Is there any way to avert this future? Should we?
We thoroughly regulate the form of machines that transport humans, in order to save lives. We regulate the content of public school curriculums according to well-established laws, for example those covering the Establishment Clause of the First Amendment. So regulating devices and regulating information content is something we're used to doing.
But now there is a machine that can ingest a copyrighted work, and spit out a derivation of that work that leverages the content, while also completely concealing the act of ingesting. How do you enforce a law against something that you can never prove happened?
Word: Cavil
Jul. 16th, 2025 04:53 pm
Wednesday's word is...
...cavil.
[kav-uhl]
1. to raise irritating and trivial objections; find fault with unnecessarily (usually followed by at or about).
--
I found this in Murder in Zanzibar by M. M. Kaye.
It’s a pity that your taste in newspapers didn’t run to a smaller sized sheet, but who am I to carp and c-cavil?
Commission me?
Jul. 16th, 2025 03:41 pm
Hey all,
Phone started repeatedly throwing bad errors, overheating, etc. I have a replacement on the way (god I hope I did not get ripped off, cheap as it was), but if anyone wants to ask for a story in exchange for donations, my ko-fi is here and you can DM or leave a screened comment.
Normal rate is 100 words per dollar. I do typically make a limit of 5K words, but that's negotiable for the right idea.
(This on top of being shorted pay that is STILL not sorted out has made for depression icing on the depressed cake.)