cross-posted from: https://lemmy.g97.top/post/761
cross-posted from: https://lemmy.g97.top/post/723
Hi! I spun up my own instance of Lemmy on my server and discovered new things about how Lemmy and federation work, and I have a lot of doubts. I don’t know whether these doubts point to problems with my setup or whether the behavior is normal, so here goes!
- My main account is on lemmy.world, and I see that new posts from communities I follow show up on lemmy.world first and only later on my instance. Is that normal?
- The same thing happens with comments, and they are even slower to “sync”. Why?
- If a community has never been discovered via the search form using the full !community@instance format, it will never appear on my instance. This means it is not possible to search for a topic (e.g. Steam Deck) and find all the posts and communities about it. Is this normal, or is it a feature we would like to see in the future, and is it compatible with the concept of the fediverse? On a big instance with a lot of users, someone may already have found that specific community or post, but on a smaller instance like mine it will never appear unless I know the exact name.
- I created a community on my instance and subscribed to it from lemmy.world, but I don’t see any posts and the two aren’t in sync. Why? Compare https://lemmy.world/c/announcements@lemmy.g97.top with https://lemmy.g97.top/c/announcements.
- From my instance I am unable to follow lemmy.ml communities (they stay pending; from lemmy.world the pending status usually clears faster).
- I am unable to search for communities on kbin.social, and when I try, my Docker instance logs an error of type “couldnt_find_object: error decoding response body: missing field `properties` at line 1 column 206”:

```
2023-06-20T22:02:16.055937Z ERROR HTTP request{http.method=GET http.scheme="https" http.host=lemmy.g97.top http.target=/api/v3/ws otel.kind="server" request_id=8211e6a4-2b30-4f8c-98b3-d93843a0e293 http.status_code=101 otel.status_code="OK"}: lemmy_server::api_routes_websocket: couldnt_find_object: error decoding response body: missing field `properties` at line 1 column 206
   0: lemmy_apub::fetcher::search::search_query_to_object_id
        at crates/apub/src/fetcher/search.rs:17
   1: lemmy_apub::api::resolve_object::perform
        with self=ResolveObject { q: "!leagueoflinux@kbin.social", auth: Some(Sensitive) }
        at crates/apub/src/api/resolve_object.rs:21
   2: lemmy_server::root_span_builder::HTTP request
        with http.method=GET http.scheme="https" http.host=lemmy.g97.top http.target=/api/v3/ws otel.kind="server" request_id=8211e6a4-2b30-4f8c-98b3-d93843a0e293 http.status_code=101 otel.status_code="OK"
        at src/root_span_builder.rs:16
```
- I have a lot of warnings in the Lemmy log of type “Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Header is expired”, such as:

```
2023-06-20T21:58:12.484275Z  WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Header is expired
   0: lemmy_server::root_span_builder::HTTP request
        with http.method=POST http.scheme="https" http.host=lemmy.g97.top http.target=/inbox otel.kind="server" request_id=caf194c5-cac3-4c37-a29c-577d65deb050 http.status_code=400 otel.status_code="OK"
        at src/root_span_builder.rs:16
LemmyError { message: None, inner: Header is expired, context: "SpanTrace" }
```
I have more questions/doubts, but I think this is enough for now! Thank you!
The “Header is expired” issue is a big part of the current federation problem. And whether you know it or not, you’ve just made the matter worse. You’re not to blame, though; I’ve done it too, along with many other people self-hosting our own instances.
The way federation currently works, each write action must be federated outwards to every federated instance. A comment reply, such as this one, must be sent out by the hosting instance. An instance receiving a federation event must also discard messages whose headers are older than 10 seconds.
Here lies the problem: popular instances like lemmy.world and lemmy.ml have thousands of users and thousands of federated servers. Yesterday, when I checked, lemmy.world had 3600 users per day and 2200+ federated servers. If there’s a really popular post on a very popular community and 10% of those users comment on it, the lemmy.world server must send 360 × 2200 = 792,000 outbound federation event messages. Each one is sent over HTTPS via TCP, so they can’t all be sent at once; the messages go into a queue, and federation workers send them out. Because HTTPS runs over TCP, delivery is not fire-and-forget: each worker must wait for its packets to be acknowledged. If an instance owner gets bored because they’re not getting all the messages and shuts their server down, a worker now has to wait for that connection to error out, delaying every message further down the queue. And if the wait pushes past 10 seconds, everyone further down the queue just gets expired messages, because the event is already outdated.
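To make the bottleneck concrete, here is a minimal sketch of that failure mode. This is hypothetical illustration code, not Lemmy’s actual worker; the function name and URLs are made up. The point is that a worker drains one delivery queue in order, so a single dead host holds up everything queued behind it until the timeout fires:

```rust
use std::time::Duration;

// Hypothetical sketch of a naive federation send queue: one worker
// drains deliveries in order, so an unreachable host's connect
// timeout delays every message queued behind it.
fn deliver_all(inbox_urls: &[&str], activity_json: &str) {
    let client = reqwest::blocking::Client::builder()
        .timeout(Duration::from_secs(10)) // worker blocks up to 10 s per host
        .build()
        .expect("failed to build HTTP client");

    for url in inbox_urls {
        // If this host is down, we wait out the full timeout before
        // moving on -- and the activities still waiting in the queue
        // keep aging toward the 10-second header expiry.
        match client.post(*url).body(activity_json.to_owned()).send() {
            Ok(resp) => println!("{url}: HTTP {}", resp.status()),
            Err(e) => eprintln!("{url}: delivery failed: {e}"),
        }
    }
}
```

With hundreds of thousands of deliveries and only a handful of workers, a few unreachable hosts are enough to push everything behind them past the 10-second window.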
So now that you’ve created an instance and added to the load on the network, just like me, what can you do? Keep your server online in a fast data center. Use Cloudflare to reduce latency. That way, at least your server won’t introduce too much latency for other servers further down the queue. Hopefully the devs figure out something to make the process better. I’ve already proposed a more scalable notification-fleet architecture change on GitHub; let’s see if they can implement that or change other requirements of the system.
Can you link your proposed change? I am interested
EDIT: Regarding the queue and the 10-second expiration: is this part of the ActivityPub protocol, or was it chosen by Lemmy’s devs while implementing the protocol?
As someone with just enough knowledge to grasp how big a task it is to propagate updates performantly through the federation, I really hope there are some smart people working on a solution. That is the biggest advantage Reddit has over Lemmy: known and centralized hardware standards. Lemmy needs to find a way to make propagation work when half of all instances are hosted at home on consumer-grade hardware.
Isn’t there a mechanism to remove servers that time out? Or a way to unregister your instance? Otherwise the model can never scale properly, as servers get retired every now and then, even within the same instance.
If there is an option to adjust/disable it, I wasn’t able to find it.
I can only answer a couple.
- This is by design, due to how federation works. Federation is literally just:

  - Instance A requests information from Instance B
  - Instance B responds to the request and sends it back
  - Instance A follows Instance B
  - Instance B now federates all future posts to Instance A

  There’s nothing more complicated to it, but it does mean that instances cannot know about other instances without being told, as there is no central location that instances connect to in order to find out about all other instances. (A rough sketch of the follow handshake is shown after this list.)
- Only posts made after subscribing are federated to an instance; it doesn’t backfill. An option for instance admins to request a backfill wouldn’t be a bad idea, though as time goes on, backfilling an entire community could take too much data for instances.
- This is an issue with lemmy.ml, although when you see “Subscribe Pending”, you tend to still see things in your feed.
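To make the handshake above concrete, here is a rough sketch of the underlying ActivityPub exchange, built with serde_json. The instance URLs are made-up examples, and a real Lemmy activity also carries an HTTP signature:

```rust
use serde_json::json;

fn main() {
    // Instance A asks to follow a community on Instance B
    // (example URLs; real requests are additionally signed).
    let follow = json!({
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://instance-a.example/activities/follow/1",
        "type": "Follow",
        "actor": "https://instance-a.example/u/alice",
        "object": "https://instance-b.example/c/announcements"
    });

    // Instance B accepts; from now on it delivers the community's
    // new posts and comments to Instance A's inbox.
    let accept = json!({
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://instance-b.example/activities/accept/1",
        "type": "Accept",
        "actor": "https://instance-b.example/c/announcements",
        "object": follow
    });

    println!("{follow:#}\n{accept:#}");
}
```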
So if you’re using a new instance, none of the old posts are visible? Really? Can it not load them on demand? Or am I misunderstanding?
I can’t even see this post on my selfhosted instance, so that’s fun.
With regard to #6, you apparently have to search for the URL of the Kbin magazine (e.g. https://kbin.social/m/RedditMigration) and then you can subscribe as normal from that point on.
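If you’d rather script that lookup than use the search form, Lemmy exposes it as the resolve_object endpoint (the same one visible in the error trace earlier in this thread). A minimal sketch, assuming a reachable instance and a valid login JWT, both placeholders here:

```rust
use std::error::Error;

const INSTANCE: &str = "https://lemmy.g97.top"; // placeholder: your instance
const JWT: &str = "<your-login-jwt>";           // placeholder: your login token

// Sketch: ask your own instance to resolve a remote community by URL,
// which pulls it into the local database so you can subscribe to it.
fn main() -> Result<(), Box<dyn Error>> {
    let client = reqwest::blocking::Client::new();
    let resp = client
        .get(format!("{INSTANCE}/api/v3/resolve_object"))
        .query(&[
            ("q", "https://kbin.social/m/RedditMigration"),
            ("auth", JWT),
        ])
        .send()?;
    println!("{}", resp.text()?);
    Ok(())
}
```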
And yeah, the discovery and comments thing is super annoying. There will be completely different conversations based on where you’re viewing the post, which kind of kills the idea of all these instances federating in the first place.
That said, it appears to be feeds coming from lemmy.world and lemmy.ml that have the majority of the comment/post sync problems. Unfortunately, that’s where the selfhosted community lives.
- Yes, because posts/comments are created there and then federated out to subscribed instances, which happens on a queue. Depending on how popular the community is, there are probably a lot of other instances ahead of you in the queue; lemmy.world is also a huge instance in general, so it might be under a lot of load.
- Same reason as above: federation is asynchronous and happens on a queue.
- In theory you could probably write a bot that just auto-subscribes to communities on other instances (see the sketch after this list), but you’re going to have a LOT of traffic coming into your instance, and you’ll basically need it to be as powerful as those big instances. Sending you only the data you’ve subscribed to receive is how all federated services work; an instance just can’t reasonably fetch everything. However, I’ve put some thought into this, and I think a way to browse other communities without subscribing to them would be a game changer.
- This could be a lot of things unfortunately. Have you tried making new posts?
- Same on my end. I think lemmy.ml is waaay overloaded at this point, and I think it might also be behind Cloudflare, which complicates the federation process.
- Seems like a problem on the kbin end. You might want to create an issue on kbin’s Codeberg.
- You’ll probably want to post that in one of the lemmy communities or on its issue list (make sure to search first).
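For the bot idea mentioned in the list above, a minimal sketch against Lemmy’s HTTP API. The /api/v3/community/list and /api/v3/community/follow endpoints exist, but the field names, parameters, and auth handling here are simplified assumptions; please don’t run something like this against instances whose admins haven’t agreed to the load:

```rust
use serde_json::{json, Value};
use std::error::Error;

const INSTANCE: &str = "https://lemmy.g97.top"; // placeholder: your instance
const JWT: &str = "<your-login-jwt>";           // placeholder: your login token

fn main() -> Result<(), Box<dyn Error>> {
    let client = reqwest::blocking::Client::new();

    // List communities this instance already knows about
    // (field names are simplified assumptions about the API shape).
    let list: Value = client
        .get(format!("{INSTANCE}/api/v3/community/list"))
        .query(&[("type_", "All"), ("sort", "New"), ("auth", JWT)])
        .send()?
        .json()?;

    // Follow each one; every follow adds inbound federation traffic.
    for entry in list["communities"].as_array().unwrap_or(&Vec::new()) {
        let id = &entry["community"]["id"];
        client
            .post(format!("{INSTANCE}/api/v3/community/follow"))
            .json(&json!({ "community_id": id, "follow": true, "auth": JWT }))
            .send()?;
        println!("followed community {id}");
    }
    Ok(())
}
```

Note that community/list only returns communities your instance already knows about, so a real bot would also need a discovery source, e.g. resolving community URLs as sketched earlier in the thread.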
Thank you for the detailed response! I’ll open issues for 6 and 7.
Can confirm that all of the above applies to my instance as well
Well, this is good. At least we can be sure almost everything is fine on our side.
Hoping someone finds a solution at least for the slow comment updating; older posts (48+ hours) especially don’t seem to sync properly.