
Pkware sucks









I did the encryption and early parallel work. There is a lot of good science behind fasp. An advantage it has over IETF protocols is that both ends trust one another. Another advantage, until recently, was out-of-order delivery. The protocol totally ignores drops for flow control. Instead, it measures change in transit time. The receiver knows, the sender needs to know, but the useful lifetime of the measurement is less than the transit time. This should make an engineer think "control theory!", and did. So, the receiver reports a stream of transit-time samples back to the sender, which feeds them into a predictor, which controls the transmission rate. Simple, in principle, but the wide Internet is full of surprises.

If you think this wouldn't be able to go a thousand times faster than TCP, you have never tried moving a file to China or India over TCP. Drops and high RTT are devastating to traditional TCP throughput on high-packet-rate routes; read about "slow-start" sometime, and do the math. :-) (Customers used to report 5% drop rates.) The problem is that for untrusted peers, drops are the only trustworthy signal of congestion. Recent improvements where routers tag packets to say "I was really, really tempted to drop this!" help some. Torrents get the out-of-order delivery and the lower sensitivity to drops, but their blocks are too big. Others commented that opening lots of connections gets around some TCP bottlenecks, but that helps much only when the drop rate isn't too high.

Interestingly (at least, I find it interesting), fasp can happily use up all the bandwidth the various TCP streams aren't using without affecting TCP rates at all. It also backs off and shares bandwidth with other instances of itself or other well-behaved protocols. With some coordination between them, multiple senders can share the available bandwidth in any chosen proportion. So, you can set up where A always gets 70% when it is sending at all, and B, C, and D share whatever is left when they have anything to send, but all without ever slowing whatever TCP traffic is running. An administrator can (literally) drag up and down the rates of ongoing transfers according to organizational priorities. That sort of featurism is part of why Aspera can command crazy prices.

It sounds like people are getting good results with fdt. It leads me to wonder if fdt has any of this good-network-citizen capability. A lot of people using fdt and making the net worse for everybody else would be an unfortunate outcome. (I keep trying to write ftd, but that is the florist service.)

I get the impression Aspera is built on a really simple solution, "pressure-adjusted" over a very long time to account for the fact that the real world isn't a vacuum :D Hearing I can get to 80% quickly is encouraging from an MVP viability point of view, thanks. (And also, from a fail-fast perspective, I've now gotten to wondering how much additional performance might be eked out of start-stop style traffic, and it's good to know it won't take long to find out whether the pursuit is worthless or not.) (I'm interpreting "the algorithm" as my target application code, and "synthetic time events" and "configured schedule" as hints at non-real-time pregeneration. I read the second half of that second paragraph as describing a network model simulation that "compiles" a particular routing graph/topology into a set-in-stone sequence of packet events that you then later analyze? This may be incorrect.) Eek, the delay distributions you describe almost sound like they might be NP-complete to solve.

Honestly speaking, a good way to solve this is not trivial:

  1. Measure the existing performance on specific data sets.
  2. Understand how many of the bottlenecks come from the network latency vs. CPU bottlenecks (if using compression).
  3. See if any domain-specific compression is needed.
  4. Understand why the current solution sucks.
  5. Design the UI to be as efficient as possible for those scenarios.

This is a non-trivial amount of work that would require a lot of back-and-forth interaction and on-the-go requirement changes, and I don't think it's entirely honest to ask someone to do this work for free in the name of cancer research (after all, you are not donating most of your paycheck to charities, are you?). If the existing solution by IBM sucks, how about making a Request for Proposal and seeing if smaller software vendors could offer something better, given that you are actually willing to pay for the work?
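The transit-time feedback loop described in the first comment (receiver reports delay samples, sender adjusts rate) can be sketched as a toy delay-based controller. This is a minimal illustration of the general idea, not the actual fasp algorithm; the class name, gains, and target values are all invented for the sketch.

```python
# Hypothetical sketch of delay-based flow control in the spirit of the
# comment above (NOT the real fasp predictor): ignore drops entirely and
# react to the *change* in transit time instead.

class DelayBasedRateController:
    def __init__(self, base_rate_mbps, target_queue_ms=5.0, gain=0.05):
        self.rate = base_rate_mbps     # current sending rate, Mb/s
        self.base_delay = None         # lowest transit time seen: propagation estimate
        self.target = target_queue_ms  # queuing delay we are willing to induce
        self.gain = gain               # proportional controller gain

    def on_transit_sample(self, transit_ms):
        # Track the minimum transit time as an estimate of the uncongested path delay.
        if self.base_delay is None or transit_ms < self.base_delay:
            self.base_delay = transit_ms
        queuing = transit_ms - self.base_delay  # change in transit time
        # Proportional control: speed up when below target queuing, slow down above it.
        error = self.target - queuing
        self.rate = max(1.0, self.rate * (1.0 + self.gain * error / self.target))
        return self.rate
```

Feeding in rising samples makes the rate back off before any packet is dropped, which is the property that lets such a protocol coexist with TCP.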


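"Do the math" on drops and RTT can be made concrete with the well-known Mathis et al. steady-state model for loss-based TCP, throughput ≈ (MSS/RTT)·(C/√p) with C ≈ 1. Plugging in a long intercontinental path with the 5% drop rate the comment mentions shows why a gigabit link can be a thousand times faster than what TCP delivers:

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on steady-state loss-based TCP throughput from the
    Mathis model: rate <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1."""
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss_rate) / 1e6

# A rough transcontinental path: 1460-byte MSS, ~300 ms RTT, 5% drops.
print(round(mathis_throughput_mbps(1460, 0.300, 0.05), 2))  # ≈ 0.17 Mb/s
```

Roughly 0.17 Mb/s on a path whose raw capacity may be hundreds or thousands of Mb/s, which is consistent with the "thousand times faster" claim for a protocol that does not stall on drops.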


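The "A always gets 70%, and B, C, D split the rest" policy from the comment can be illustrated with a tiny allocation function. The sender names and the policy encoding here are invented for the example; this says nothing about how Aspera actually configures such shares.

```python
def allocate(link_mbps, active):
    """Hypothetical illustration of policy-based sharing: sender "A" takes a
    fixed 70% of the link whenever it is active, and any other active
    senders split the remainder evenly."""
    alloc = {}
    remaining = link_mbps
    if "A" in active:
        alloc["A"] = 0.7 * link_mbps
        remaining -= alloc["A"]
    others = [s for s in active if s != "A"]
    for s in others:
        alloc[s] = remaining / len(others)
    return alloc

print(allocate(1000, ["A", "B", "C"]))  # {'A': 700.0, 'B': 150.0, 'C': 150.0}
```

In the real protocol this division would apply only to the bandwidth TCP is not using, so the shares never come at the expense of ordinary TCP traffic.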


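Step 2 of the checklist (network latency vs. CPU bottlenecks when compression is involved) can be triaged with a quick measurement. This helper is a hypothetical sketch, not part of any tool mentioned above: it compares CPU time spent compressing a sample payload against the time needed to push the compressed bytes over a link of a given speed.

```python
import os
import time
import zlib

def compression_cpu_share(payload, net_mbps):
    """Rough bottleneck triage (hypothetical helper): ratio of CPU time spent
    compressing to wire time for the compressed output. Values above 1 mean
    compression, not the network, is the bottleneck at that link speed."""
    t0 = time.process_time()
    compressed = zlib.compress(payload, 6)
    cpu_s = time.process_time() - t0
    wire_s = len(compressed) * 8 / (net_mbps * 1e6)
    return cpu_s / wire_s

sample = os.urandom(1 << 20)  # 1 MiB of incompressible test data
ratio = compression_cpu_share(sample, net_mbps=100)
```

Running this on representative data sets, rather than random bytes, is what actually answers whether domain-specific compression (step 3) is worth pursuing.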




