2007 Vancouver PHP Conference - Part I - The Lerdorf Keynote note dump

The 2007 Vancouver PHP Conference was a blast. Here is a link/note dump of what I took away from the sessions I attended:

Rasmus Lerdorf - Keynote
========================
First up: Rasmus Lerdorf, the inventor of PHP (and thereby a man to whom I owe at least a day of my life for the improved productivity he has wrought).
He said, "Pay attention to the hormonal level of programming-- the high that users get when they succeed." People get a "high" from clicking with a website-- not
clicking on a site-- the with it-- connecting with it.
With Web 2.0 you don't need editors and content controllers-- you need to hit the right harmonic with your intended user base.
He then went on to show off Yahoo Pipes (http://pipes.yahoo.com/), a super-cool Web 2.0 application built for creating, filtering, and aggregating RSS feeds.

I have long felt that we're drowning in data. We collect all of this information; it clogs our hard drives and we don't use it. Lerdorf observed that sites that collect extra metrics can surface them in interesting ways: e.g. "Camera Info" on Flickr, where you can see whether a given camera consistently takes a given type of photo.

He described the Barcelona experiment, where photo sharing has been used to build a collaborative 3D experience based on geocoding and date stamps.

Lerdorf moved on to the topic of performance. He laid out a theoretical problem: his employer, Yahoo, tasks him with building an application. At its first cut, it can satisfy 17 requests per second. Yahoo estimates this application will see 1700 requests per second. He could ask for 100 servers (1700 / 17 = 100), and at Yahoo you can get 100 servers with very little trouble. Talk about rollout capacity. But asking for more servers feels like a cheat. You do not make scaling possible through multiplication-- you make it possible through performance improvements.

Performance Helpers
- http_load can be used to benchmark/pound sites (a typical run looks like http_load -parallel 10 -seconds 60 urls.txt-- the numbers there are just illustrative)
- vmstat 1 will show you where the chokes are
- valgrind (a CPU emulator) is handy-- run it against a non-forked Apache (e.g. /usr/sbin/httpd -X) so you profile a single process
- Using KCachegrind, you can view a callgraph and look for the chokepoints
- Some gotchas: internal Zend functions show up in the callgraph with names like zif_*
- Your callgraph should reveal opportunities:

  • problems
  • tweaks
  • changes to conf files
  • look at persistent vs. non-persistent connections-- persistent was faster in the keynote example (see the first sketch after this list)
  • you could have direct query problems - query cache is good for benchmarking but bad for production environments
  • db_opts is an opportunity for tuning

- APC (Alternative PHP Cache) compiles code and caches the compiled applications in shared memory, off the drive. As soon as you get off of the disk bus, you have a faster process. The downside: more RAM is required, and fewer applications fit per box-- otherwise you will pack the RAM with competing applications. (See the php.ini sketch after this list.)
- pecl install apc is a gotcha because of its shared memory usage.
- an include file is better than include cache.
- turn off stats (apc.stat=0)-- use full-path includes-- work from shared memory
- you should cache configs; otherwise you're going off to load and parse a text file repeatedly (see the config sketch after this list)
- cache db query results so that multiple hits on the same query draw from the same result set instead of reprocessing the data (see the query-cache sketch after this list)
- fewer, larger include files are better (is this Drupal's huge weakness? If so, I have a solution: find a way to build the whole application out of include files during development, then save it out as one large application with few or no include files.)
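
On the persistent-connection point, here's a minimal sketch of the difference in PHP, using the old mysql_* API that was current in 2007; the hostname and credentials are placeholders:

    <?php
    // Non-persistent: opens a fresh connection on every request and
    // tears it down when the script ends.
    $db = mysql_connect('db.example.com', 'appuser', 'secret');

    // Persistent: reuses a matching connection left open by this
    // Apache child, skipping the TCP + auth setup cost on each hit.
    $db = mysql_pconnect('db.example.com', 'appuser', 'secret');

    $result = mysql_query('SELECT COUNT(*) FROM photos', $db);
    ?>

The catch with persistent connections is that every Apache child holds one open, so keep an eye on the database's connection limit.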
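For the APC items, a rough php.ini sketch-- the values here are illustrative, not anything Lerdorf prescribed:

    ; loaded after running `pecl install apc`
    extension=apc.so
    ; size of the shared memory segment, in megabytes (APC 3.x)
    apc.shm_size=64
    ; skip the stat() call on every include; requires full-path
    ; includes and a cache flush/restart whenever the code changes
    apc.stat=0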
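The config caching can be as simple as parking the parsed array in APC's shared memory-- a sketch, with the file path and cache key invented for illustration:

    <?php
    // Pull the parsed config from shared memory; only on a cache
    // miss do we go back to disk and re-parse the text file.
    $config = apc_fetch('site_config');
    if ($config === false) {
        $config = parse_ini_file('/var/www/conf/app.ini', true);
        apc_store('site_config', $config);
    }
    ?>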
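The same fetch-or-store pattern covers db query results-- again a sketch, with the query and the five-minute TTL made up:

    <?php
    // Repeated hits on the same query read the cached rows from
    // shared memory instead of re-running it against the database.
    $rows = apc_fetch('recent_photos');
    if ($rows === false) {
        $result = mysql_query('SELECT id, title FROM photos ORDER BY id DESC LIMIT 20');
        $rows = array();
        while ($row = mysql_fetch_assoc($result)) {
            $rows[] = $row;
        }
        apc_store('recent_photos', $rows, 300); // TTL in seconds
    }
    ?>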

Here's a link to some of the screenshots I took.
