Wednesday, January 12, 2011

historical mozilla-central git repository

A number of people use git to work with the mozilla hg tree. In the past I've wanted the entire history as a git repo so I converted the old CVS repository to git and put it up on people.mozilla.org.

You can set it up as follows:

git clone http://people.mozilla.org/~jmuizelaar/mozilla-cvs-history.git
git clone git://bluishcoder.co.nz/git/mozilla-central.git

cd mozilla-central/.git/objects/pack
# set up symbolic links to the cvs-history pack files
ln -s ../../../../mozilla-cvs-history/.git/objects/pack/pack-5b5d604ab48cf7bc2a6b4495292fa8700a987c5f.pack .
ln -s ../../../../mozilla-cvs-history/.git/objects/pack/pack-5b5d604ab48cf7bc2a6b4495292fa8700a987c5f.idx .
cd ../../

# add a graft that makes the first revision in the mozilla-central repo
# a child of the last revision in the cvs-history
echo 2514a423aca5d1273a842918589e44038d046a51 3229d5d8b7f8376cfb7936e7be810635a14a486b > info/grafts

Now you have a git repository containing all of the history. You can update the mozilla-central repository as you normally would. The conversion isn't perfect, but it's been good enough to have working blame back into CVS time.
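
To check that the graft took effect, something along these lines should work (a rough sanity check; substitute any long-lived file for the blame example):

cd ..    # back to the top of the mozilla-central clone
# with the graft in place, the oldest commits reachable from the first
# hg revision should be CVS-era commits
git log --oneline 2514a423aca5d1273a842918589e44038d046a51 | tail
# blame on any long-lived file should likewise reach back past the hg import
git blame <some-long-lived-file> | head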

Tuesday, January 11, 2011

Firefox acceleration prefs changing

I just landed a changeset that changes the names of the layer acceleration prefs in Firefox.

The old prefs were:
layers.accelerate-all
layers.accelerate-none

The new prefs are:
layers.acceleration.disabled
layers.acceleration.force-enabled

layers.accelerate-all previously defaulted to 'true' on Windows and OS X, which meant that there was no easy way to force layer acceleration on if your card had been blacklisted for some reason. The new prefs allow the blacklist to be overridden. The old prefs are not being migrated over to the new names. If you have a problem with the defaults, please file bugs.
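
For example, to force layer acceleration on despite the blacklist, you can flip the new pref in about:config or in a user.js file in your profile directory (a quick sketch, not the only way to do it):

  // force layer acceleration on even if your card/driver is blacklisted
  user_pref("layers.acceleration.force-enabled", true);
  // or, to turn layer acceleration off entirely:
  // user_pref("layers.acceleration.disabled", true);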

Saturday, January 8, 2011

Trying out AVX

Intel's new Sandy Bridge CPUs came out this week and they support a new set of instructions called AVX. The AVX instructions are a much bigger change than the usual SSE revisions of the past few micro-architectures. First of all, they double the 128 bit SSE registers to 256 bits. Second, they introduce an entirely new instruction encoding. The new encoding switches from 2 operand instructions to 3 operand instructions, allowing the destination register to be different from the source registers. For example:
  addps r0, r1       # (r0 = r0 + r1)
         vs.
  vaddps r0, r1, r2  # (r0 = r1 + r2)
This new encoding is not only used for the new 256 bit instructions, but also for the 128 bit AVX versions of all the old SSE instructions. This means that existing SSE code can be improved without requiring a switch to 256 bit registers. Finally, AVX introduces some new data movement instructions, which should help improve code efficiency.
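
As a rough illustration (a made-up snippet, not code from qcms), the exact same 128 bit intrinsics compile to either encoding depending on the compiler flags: with plain SSE you get the 2 operand addps/mulps, with -mavx you get the 3 operand vaddps/vmulps:

  #include <xmmintrin.h>

  /* scale and bias four floats at once; with -mavx the compiler emits
     VEX-encoded vmulps/vaddps for this, with no source changes */
  static __m128 scale_and_bias(__m128 x, __m128 scale, __m128 bias)
  {
      return _mm_add_ps(_mm_mul_ps(x, scale), bias);
  }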

I decided to see what kind of performance difference using AVX could make in qcms with minimal effort. If you use SSE compiler intrinsics, like qcms does, switching to AVX is very easy; simply recompile with -mavx. In addition to using -mavx, I also took advantage of some of the new data movement instructions by replacing the following:
  vec_r = _mm_load_ss(r);
  vec_r = _mm_shuffle_ps(vec_r, vec_r, 0);

with the new vbroadcastss instruction (via the _mm_broadcast_ss intrinsic):
  vec_r = _mm_broadcast_ss(r);
Overall, this change reduces the inner loop by 3 instructions.
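
In context the change looks something like this (a sketch with hypothetical names, guarded so it still builds without -mavx; the real qcms inner loop obviously does more than this):

  #include <immintrin.h>

  /* splat one float across all four lanes of a 128 bit register */
  static inline __m128 splat(const float *p)
  {
  #ifdef __AVX__
      /* one instruction: vbroadcastss */
      return _mm_broadcast_ss(p);
  #else
      /* two instructions: movss + shufps */
      __m128 v = _mm_load_ss(p);
      return _mm_shuffle_ps(v, v, 0);
  #endif
  }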

The performance results were positive, but not what I expected. Here's what the timings were:
SSE2:                 75798 usecs
AVX (-mavx):          69687 usecs
AVX w/ vbroadcastss:  72917 usecs
Switching to the AVX encoding improves performance by more than I expected: nearly 10%. But adding the new vbroadcastss instruction, in addition to the AVX encoding, not only doesn't help, but actually makes things worse. I tried analyzing the code with the Intel Architecture Code Analyzer, but the analyzer also thought that using vbroadcastss should be faster. If anyone has any ideas why vbroadcastss would be slower, I'd love to hear them.

Despite this weird performance problem, AVX seems like a good step forward and should provide good opportunities for improving performance beyond what's possible with SSE. For more information, check out this presentation, which gives a good overview of how to take advantage of AVX.