
Just wondering if anyone has information on what "costs" are associated with including a LARGE (600K or more) php file containing 100s of class files. Does it really make much difference in comparison to autoloading individual files that for instance searches across several directories before finding a match?

Would having APC caching on make this cost negligible?

Answers

4

Basically, the cost of including one big file depends on your use case. Let's say you have a large file with 200 classes.

If you only use 1 class, including the large file will be more expensive than including a small class file for that individual class.

If you use all 200 classes, including the large file will be significantly less expensive than including 200 small files.

Where the cutoff lies is really system dependent. I would imagine it's somewhere around the 50% mark (i.e., if you're using fewer than 100 of the 200 classes in any one request, autoload).
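For reference, a directory-searching autoloader of the kind the question describes is only a few lines. Here's a minimal sketch; the search paths and the ClassName-to-ClassName.php naming convention are assumptions for illustration:

    <?php
    // Minimal sketch: an autoloader that searches several directories
    // for a matching class file, as the question describes. The paths
    // and file-naming convention are hypothetical.
    spl_autoload_register(function ($class) {
        foreach (['lib/', 'models/', 'controllers/'] as $dir) {
            $file = $dir . str_replace('\\', '/', $class) . '.php';
            if (is_file($file)) {
                require $file;
                return; // stop at the first match
            }
        }
    });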

Using APC will likely shift the break-even point toward fewer classes (without it, 100 classes used might be the break-even point; with it, the break-even might be 50), since it makes the large single include much cheaper but only slightly lowers the overhead of each individual smaller include.

The exact break-even point will be 100% system dependent (how fast your disk I/O is, how fast your processors are, how much memory you have, etc.), so the only way to know for sure on your platform is to test.
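If you do want to test, a rough micro-benchmark along these lines is enough to get a feel for it. This is a sketch only: the file names are hypothetical stand-ins for your own code, and each variant should run in a fresh process so already-declared classes and opcode caching don't skew the numbers.

    <?php
    // Time one include strategy; swap variant A for variant B to compare.
    $start = microtime(true);

    require 'all_classes.php';           // variant A: the single 600K file
    // require 'classes/SomeClass.php';  // variant B: only the class you need

    $elapsed = (microtime(true) - $start) * 1000;
    printf("include took %.3f ms, peak memory %.1f KB\n",
           $elapsed, memory_get_peak_usage(true) / 1024);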

However, more is at stake than raw performance. Maintainability will suffer with one large file since it's harder to work on multiple classes at the same time (tabs in an IDE become useless). I personally would keep all the classes in separate files and make my life as the developer easier rather than making one giant monstrosity of a file.

Now, if you have Facebook-level traffic, it may be worth investigating further. But if you don't, I personally wouldn't worry about it...

Sunday, November 20, 2022
3

I have had a similar situation, but ultimately decided to implement it with JSON and REST rather than the php-java bridge. The reduced complexity and greater re-use of services exposed as REST outweighed the need for better performance.

Thursday, October 6, 2022
2

I ran some timings on a 3 GHz in-order PowerPC processor. On that architecture, a virtual function call costs about 7 nanoseconds more than a direct (non-virtual) function call.

So it's not really worth worrying about the cost unless the function is something like a trivial Get()/Set() accessor, in which case anything other than inlining is wasteful. A 7 ns overhead on a function that inlines to 0.5 ns is severe; a 7 ns overhead on a function that takes 500 ms to execute is meaningless.

The big cost of virtual functions isn't really the lookup of a function pointer in the vtable (that's usually just a single cycle), but that the indirect jump usually cannot be branch-predicted. This can cause a large pipeline bubble as the processor cannot fetch any instructions until the indirect jump (the call through the function pointer) has retired and a new instruction pointer computed. So, the cost of a virtual function call is much bigger than it might seem from looking at the assembly... but still only 7 nanoseconds.

Edit: Andrew, Not Sure, and others also raise the very good point that a virtual function call may cause an instruction cache miss: if you jump to a code address that is not in cache then the whole program comes to a dead halt while the instructions are fetched from main memory. This is always a significant stall: on Xenon, about 650 cycles (by my tests).

However, this isn't a problem specific to virtual functions: even a direct function call will cause a miss if you jump to instructions that aren't in cache. What matters is whether the function has been run recently (making it more likely to be in cache), and whether your architecture can predict static (non-virtual) branches and fetch those instructions into cache ahead of time. My PPC does not, but maybe Intel's most recent hardware does.

My timings control for the influence of icache misses on execution (deliberately, since I was trying to examine the CPU pipeline in isolation), so they discount that cost.

Thursday, September 29, 2022
 
justind
 
4

There are three parts to the cost of new:

  • Allocating the memory (may not be required if it's a value type)
  • Running the constructor (depending on what you're doing)
  • Garbage collection cost (again, this may not apply if it's a value type, depending on context)

It's hard to use C# idiomatically without ever creating any new objects in your main code... although I dare say it's feasible by reusing objects as heavily as possible. Try to get hold of some real devices, and see how your game performs.

I'd certainly agree that micro-optimization like this is usually to be avoided in programming, but it's more likely to be appropriate for game loops than elsewhere - as obviously games are very sensitive to even small pauses. It can be quite hard to judge the cost of using more objects though, as it's spread out over time due to GC costs.

The allocator and garbage collector are pretty good in .NET, although they're likely to be simpler on a device (Windows Phone 7, I assume). In particular, I'm not sure whether the Compact Framework CLR (which is the one WP7 uses) has a generational GC.

Thursday, September 29, 2022
 
5

No, the use statement does not cause the class to be loaded (it does not even trigger the autoloader).

It just declares a short name (an alias) for a class. I assume the cost in terms of CPU and RAM is on the order of a few CPU cycles and a few bytes.
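A minimal sketch demonstrating this (the Foo\Bar class name is hypothetical; the autoloader here just reports what it's asked for):

    <?php
    // Sketch: a "use" statement alone never reaches the autoloader.
    spl_autoload_register(function ($class) {
        echo "autoloader asked for $class\n";
    });

    use Foo\Bar as B;       // prints nothing: this only aliases the name

    class_exists(B::class); // prints "autoloader asked for Foo\Bar" --
                            // only actual use of the class triggers loading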

Wednesday, September 21, 2022