Optimally slow

Optimizing queries in Visual FoxPro is commonly equated with increasing speed. Years ago, the discussion about an index on DELETED() showed that this is not a universal rule. Optimizing queries can be quite a tricky thing. Here's a situation to think about: you have two tables, A and B, and a query that returns all records from A and about half the records of B:

SELECT * FROM A ;
JOIN B ON A.PK = B.FK ;
WHERE SomeCondition
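
If you want to see what Rushmore actually makes of such a query, SYS(3054) reports the optimization level. A minimal sketch, reusing the placeholder names from above (csResult is just a hypothetical cursor name):

* Show Rushmore's report for the query; table, field and condition names
* are the placeholders from the example above.
=SYS(3054, 11)    && report filter and join optimization
SELECT * FROM A ;
JOIN B ON A.PK = B.FK ;
WHERE SomeCondition ;
INTO CURSOR csResult
=SYS(3054, 0)     && turn the report off again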

Counter-intuitively, the optimized version of this query, which touches only half of B, can be slower than the unoptimized one that reads the entire table. What seems to be an oxymoron at first is easily explained. When Visual FoxPro reads records in a loop, it never reads single records, but a block of records. In the unoptimized case, Visual FoxPro scans sequentially through table B, picking up a dozen or so records with each read operation.
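
As a rough back-of-the-envelope sketch (both figures are assumptions; the real records-per-read value depends on record width and buffer settings), the number of read operations for such a sequential pass is simply the record count divided by the records fetched per read:

* Estimate the read operations for a full sequential pass over B.
* Record count and records-per-read are assumptions for illustration only.
LOCAL lnRecords, lnRecordsPerRead
lnRecords        = 120000    && assumed size of table B
lnRecordsPerRead = 12        && assumed records fetched per read operation
? "Sequential pass over B:", CEILING(lnRecords / lnRecordsPerRead), "read operations"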

When you optimize the query, Visual FoxPro creates a map of hits. If there's one miss among a dozen hits, Visual FoxPro has to split the single read request of 12 records into two read requests of five and six records. The more records you filter out, the more fragments you have to read. In the end, you might end up with only 50% of the records, but need 600% more read operations to retrieve them.
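
A minimal simulation of this effect (block size and hit rate are assumptions for illustration): it rolls a random hit map for one block of records and counts how many contiguous runs of hits, and therefore separate read requests, are needed to fetch only the matching records.

* Simulate how a hit map fragments a single block read.
* Block size and hit rate are assumptions for illustration only.
LOCAL lnBlockSize, lnHitRate, lnI, lnFragments, llHit, llPreviousHit
lnBlockSize   = 12
lnHitRate     = 0.5          && roughly half the records match the WHERE clause
lnFragments   = 0
llPreviousHit = .F.
FOR lnI = 1 TO lnBlockSize
    llHit = (RAND() < lnHitRate)
    IF llHit AND NOT llPreviousHit
        * a new contiguous run of hits starts, costing one more read request
        lnFragments = lnFragments + 1
    ENDIF
    llPreviousHit = llHit
ENDFOR
? "One unoptimized block read turns into", lnFragments, "read requests"

Run it a few times: the more the hits are scattered across the block, the more read requests that single block costs.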

Read operations across a network are expensive. Each of them frequently maps to a network packet of its own. Every additional packet adds overhead for protocol headers, adds delay due to network latency, and adds workload to the network for routing and filtering packets. Congratulations!
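
To get a feel for the magnitude, here is a purely hypothetical back-of-the-envelope calculation. It reuses the 10,000 sequential reads from the sketch above and assumes half a millisecond of network round-trip time per read request:

* Purely hypothetical numbers: 10,000 reads for the sequential pass, 600% more
* for the optimized, fragmented pass, half a millisecond of latency per read.
LOCAL lnLatency, lnSequentialReads, lnFragmentedReads
lnLatency         = 0.0005                   && seconds per read request (assumed)
lnSequentialReads = 10000
lnFragmentedReads = lnSequentialReads * 7    && 600% more read operations
? "Unoptimized:", lnSequentialReads * lnLatency, "seconds of network wait"
? "Optimized:  ", lnFragmentedReads * lnLatency, "seconds of network wait"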