[new blog post]
Use of Time in Distributed Databases (part 4): Synchronized clocks in production databases
muratbuffalo.blogspot.com/2025/01/use-...
Posts by Sharukh
It doesn't mean the outbound path will have network issues, but the additional wait is effectively another network call, which doesn't make much sense within the same network/DC.
I still can't wrap my head around the issues that could occur internally inside the data center/network for the secondary nodes. Something like congestion or a packet drop will again increase the wait time. Relying on the master seems to be the better option, as suggested.
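The wait being debated here reads like a Spanner-style commit wait on clock uncertainty. A minimal sketch of the idea, where `EPSILON_MS` is a made-up uncertainty bound, not a number from the post:

```python
import time

# Hypothetical clock-uncertainty bound (epsilon), e.g. derived from NTP/PTP telemetry.
EPSILON_MS = 5.0

def commit_wait(commit_ts_ms: float, epsilon_ms: float = EPSILON_MS) -> float:
    """Hold the commit until commit_ts + epsilon has definitely passed on
    every node's clock, so no later transaction can be assigned an earlier
    timestamp. Returns the extra wait actually incurred, in ms."""
    release_at_ms = commit_ts_ms + epsilon_ms
    now_ms = time.time() * 1000.0
    wait_ms = max(0.0, release_at_ms - now_ms)
    time.sleep(wait_ms / 1000.0)
    return wait_ms
```

The point of the thread: this wait is bounded by epsilon, so tightening clock sync directly shrinks it, whereas a round trip to the master costs a full network RTT regardless.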
Someone might be asking for tunneling protocols, and #Alibaba thought this guy might need a tunnel boring machine 🤣🤣.
Interviewer: Can you explain this gap in your resume?
Network Engineer: Migrating everything to IPv6.
OMG! Someone put in the effort to model BGP in TLA+.
#tla+
conf.tlapl.us/2024-fm/slid...
* uses it as a forwarding plane.
Is it true that you guys run external BGP sessions inside a container?
The router just seems to be used as the forwarding plane.
Ohh, please do repost it if you see it somewhere on bsky. Based on the transfer rate 🤣, that thing could be massive, with a heat sink as large as a human leg.
Shit, I've been going a bit off topic, sorry for that.
Since the majority of AZ outages were happening via third-party networks, AWS kind of fixed it once and for all.
Yeah, that's a good place to start, but oftentimes they still have to use overlay networks or dark fiber to secure connectivity.
For the last few years they have kind of started ditching the provider overlay parts and begun using the available dark fiber for direct DWDM cross-connectivity between AZs.
Plus, connector and other attenuation factors still exist. I would really like to see service providers trenching more HCF rather than just seeing it inside the DC.
Based on some recent publications, it averages a loss of 0.08±0.03 dB/km.
Which is actually an improvement, but I have never seen a rack in a DC connected by a patch cord longer than 10 meters.
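A quick back-of-envelope with the figures above (the 0.08 dB/km is from the post; the 80 km inter-AZ span is a made-up distance for comparison):

```python
def fiber_loss_db(length_km: float, loss_db_per_km: float = 0.08) -> float:
    """Attenuation from fiber length alone; connectors and splices excluded."""
    return length_km * loss_db_per_km

# A 10 m patch cord inside a DC vs a hypothetical 80 km inter-AZ dark-fiber span.
patch_loss = fiber_loss_db(0.010)  # effectively zero
span_loss = fiber_loss_db(80.0)    # where the dB/km figure actually matters
```

Which is the point: at patch-cord lengths the fiber loss is noise next to connector loss, so low-loss HCF pays off in trenched long-haul spans, not inside the DC.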
🤣🤣
Nice! I would be very interested to see the next iteration of this with partitioning. So many ideas come to mind with fixed bytes per record + byte-range fetches with a commit point.
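One way the fixed-bytes-per-record + byte-range + commit-point idea could fit together, as a sketch only (`RECORD_SIZE` and the clamping rule are my assumptions, not anything from the post):

```python
RECORD_SIZE = 64  # hypothetical fixed bytes per record

def record_range(first: int, last: int, commit_point: int) -> tuple[int, int]:
    """Byte range covering records [first, last], clamped to the commit
    point: records at index >= commit_point are not yet durable, so they
    must never be served."""
    last = min(last, commit_point - 1)
    if last < first:
        raise ValueError("no committed records in the requested range")
    start = first * RECORD_SIZE
    end = (last + 1) * RECORD_SIZE - 1  # HTTP Range byte positions are inclusive
    return start, end

# e.g. fetching records 10..19 when only 15 records are committed:
# record_range(10, 19, 15) -> (640, 959), i.e. "Range: bytes=640-959"
```

Fixed-size records are what make this cheap: the offset math is pure arithmetic, so no index object is needed to turn a record range into a single ranged GET.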
And funnily enough, when you reach the VPC part and explain that the transport layer works a bit differently by proxying ARP, their 🧠 will go 💥.
Ohhh, they would be like: why the hell would we need to go through this shit?
Yeah, it's more intimidating, and then there are vendor-specific hellholes for some of these devices.
But the cool part is the way distributed systems were conceived in the '90s, with routing and loop-detection protocols.
And then, to make things more interesting, we have overlay network protocols such as MPLS, GRE, VXLAN, etc.
Yeah, it's like another swirl when it comes to networking: starting with the essentials of socket/transport protocols and then moving to Ethernet, FIB, RIB, NAT, routing, etc.
I still haven't found a proper blog post covering all the above topics from a dev perspective.
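For the FIB item in that list, the core idea does fit in a few lines. A toy longest-prefix-match lookup (the routes and interface names are invented; real FIBs use tries or TCAM, not a linear scan):

```python
import ipaddress

# Toy FIB: (prefix, outgoing interface).
FIB = [
    (ipaddress.ip_network("10.0.0.0/8"), "ge-0/0/1"),
    (ipaddress.ip_network("10.1.0.0/16"), "ge-0/0/2"),
    (ipaddress.ip_network("0.0.0.0/0"), "ge-0/0/0"),  # default route
]

def lookup(dst: str) -> str:
    """Longest-prefix match: of all routes covering dst, the most
    specific (longest prefix) wins."""
    ip = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, iface) for net, iface in FIB if ip in net]
    return max(matches)[1]
```

So `10.1.2.3` takes the /16 even though the /8 and the default route also match, which is the one rule that makes most routing behavior legible to a dev.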
Funnily enough 🤣, I just came up with it yesterday for a certain query use case. I also want to try DynamoDB as a metastore for the object entries.
This is a really cool addition to warpstream. Now that S3 has CAS, it would be neat to see someone implement an open source schema registry on S3 with no control plane. Schema registries are surprisingly hard to run well at scale.
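A rough sketch of what the no-control-plane registration path could look like. The dict stands in for the bucket, and the key layout is an assumption; on real S3 the compare-and-set would be a conditional `PutObject` with `If-None-Match: "*"` rather than the non-atomic check below:

```python
import hashlib
import json

class SchemaExists(Exception):
    """Raised when the content-addressed key already exists (CAS failure)."""

def register_schema(store: dict, subject: str, schema: str) -> str:
    """Register a schema under a content-addressed key with create-if-absent
    semantics. With S3's conditional writes doing the CAS, no separate
    control plane or lock service is needed for correctness."""
    digest = hashlib.sha256(schema.encode()).hexdigest()
    key = f"schemas/{subject}/{digest}.json"
    if key in store:  # on S3: the conditional PutObject fails with 412
        raise SchemaExists(key)
    store[key] = json.dumps({"subject": subject, "schema": schema})
    return key
```

Content-addressing gives idempotent IDs for free; the hard remaining parts (compatibility checks, listing versions in order) are why registries are still tricky to run at scale.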