Monday, October 25, 2010

Ubuntu Karmic Koala and VLC

It's been some time since I last posted here ... nothing happened to me that seemed interesting for a broader audience. Well, now I found something.

On my notebook, I am forced to use nothing more current than Ubuntu Karmic Koala (9.10), because KMS with the old Intel graphics doesn't really work - see more below. Luckily, many of the packages are still being updated (including Firefox), but one is missing for me ... a current version of VLC, the VideoLAN client. So I had to compile it myself. It only took about 30 minutes of CPU time ;)

There are no VLC packages for Karmic on Launchpad, so I offer mine for download, in case anyone is interested. This is not an apt repository, just a folder with .deb files. I compiled them from the Lucid sources of Ferramosca Roberto's LffL VLC

To install, download the files, and use the following:

sudo dpkg -i libmtp8_1.0.2-1ubuntu1_i386.deb
sudo dpkg -i vlc_1.1.4.1-1~lffl~lucid~ppa1_i386.deb \
 vlc-data_1.1.4.1-1~lffl~lucid~ppa1_all.deb \
 libvlc5_1.1.4.1-1~lffl~lucid~ppa1_i386.deb \
 libvlccore4_1.1.4.1-1~lffl~lucid~ppa1_i386.deb
These are the packages you need, at least - some of the plugins may be handy; the -dev packages you probably won't need.
Oh, they are called "lucid" because I compiled them from the Lucid .dsc sources on Launchpad ... I should fix this, and the maintainer name, too ;)

Just for information - my experiences with Ubuntu, old Intel graphics (82852/855GM), kernel-based mode setting (KMS) and XVideo acceleration were largely unsuccessful.
Depending on the distribution and the current update state,
* KMS+XVideo is not supported (and not planned to be supported, I believe)
* nomodeset+XVideo either doesn't work, or crashes the X11 server
My experiences with openSUSE were similar, so it appears to be something the old hardware (IBM Thinkpad X40) has to live with.

Saturday, December 05, 2009

Qt 4.6.0 on AVR32 and its bugs: Error: symbol `.L987' is already defined

Qt 4.6.0 is out – there is only one source archive now, "-everywhere-", which also includes the Linux embedded version. I tried to build it for an AVR32 embedded system, but a nice compiler error prevented it.

Error: Symbol `.L987' is already defined

compiling painting/qbrush.cpp
{standard input}: Assembler messages:
{standard input}:5426: Error: symbol `.L987' is already defined
{standard input}:5544: Error: symbol `.L1012' is already defined
{standard input}:6236: Error: symbol `.L1129' is already defined
{standard input}:6353: Error: symbol `.L1154' is already defined
make: *** [.obj/release-shared-emb-avr32/qbrush.o] Error 1

Googling it

This error is usually caused by using "inline" and "asm" statements in combination – see, e.g., the following exchange. It looks like an internal compiler error, but is usually a coding error.

“This is not a gcc bug, you cannot declare a label in an inline-asm that is going to be exposed.”

“Is there a reference of some sort? I was unable to find one with google.”
”No, just a general rule as inline-asms will be copied when inlined or cloned. What is happening here is clon(n)ing is happening so we generate two copies of that function.”

So, some inline method contains assembler code – the code gets duplicated because of the inlining – and thus, the assembler labels are duplicated as well.

Different Compiler?

I do not think that Nokia explicitly inserted the inline+asm combination; it is probably the result of some gcc optimization. Unfortunately, it is not possible to use a different compiler than the one provided by Atmel in "buildroot-avr32-2.3.0":

Using built-in specs.
Target: avr32-linux-uclibc
Configured with: /avr32/buildroot-avr32-v2.3.0/toolchain_build_avr32/gcc-4.2.2/configure
  --prefix=/avr32/buildroot-avr32-v2.3.0/build_avr32/staging_dir
  --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu
  --target=avr32-linux-uclibc --enable-languages=c,c++
  --enable-__cxa_atexit --enable-target-optspace --with-gnu-ld
  --with-gmp=/avr32/buildroot-avr32-v2.3.0/toolchain_build_avr32/gmp
  --with-mpfr=/avr32/buildroot-avr32-v2.3.0/toolchain_build_avr32/mpfr
  --enable-shared --disable-nls --enable-threads --disable-multilib
  --enable-sjlj-exceptions --disable-libmudflap --disable-libssp
  --with-build-time-tools=/avr32/buildroot-avr32-v2.3.0/build_avr32/staging_dir//bin
Thread model: posix
gcc version 4.2.2-atmel.1.1.3.buildroot.1

Change Optimisation Level

So, we have to find a way around the above error. If it is an optimization problem, let's just use a different optimisation level.

Go to src/gui, patch the Makefile by adding -O to the end of CXXFLAGS, call make .obj/release-shared-emb-avr32/qbrush.o, and it runs through – now you can remove the -O again and compile the rest.

Alternatively, remove the @ at the beginning of the CXX = … line, call make, copy the printed avr32-linux-uclibc-g++ command, and add -O there.

Using -O3 does not work – then the inlining is done even more often, resulting in triple definitions (or rather, duplicate errors about duplicate definitions).

No idea where the error stems from, but at least, Qt 4.6.0 now compiles.


Btw … to get that far, a few more changes are necessary. Those are, in short, in package/qtopia/

and you have to remove all the mouse driver statements – I have no idea how those are configured now ... most of them compile automatically; I may still be missing the tslib driver:
### Mouse drivers
# remove all of them for Qt460
QTOPIA4_CONFIGURE += -no-mouse-qvfb

Fixing 4.5.x Colour Depths

The previous version(s), 4.5.x, had a bug in a code optimisation that relied on unaligned memory access – OK for i386 and its successors, not OK for many other architectures. Some were explicitly excluded; AVR32 was not. The bug triggered when converting bitmaps between 24 and 16 bit colour depth, e.g. when connecting via VNC, or when using certain display depths directly.
See the report about fixing qt_blend_argb24_on_rgb16() in qblendfunctions.cpp.
That fix seems to be included in 4.6.0, at least.
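For illustration, here is a simplified sketch (my own code, not Qt's) of the kind of 24-to-16-bit conversion such a blend function performs. Accessing the source pixel byte by byte is safe at any alignment; the optimised 4.5.x path instead read whole 32-bit words at unaligned addresses, which i386 tolerates silently but AVR32 does not.

```c
#include <stdint.h>

/* Simplified sketch (not Qt's actual code): convert one RGB888 pixel
 * to RGB565 by truncating each channel to 5/6/5 bits. Byte-wise reads
 * like this are alignment-safe on every architecture. */
static uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11)  /* 5 bits red   */
                    | ((g >> 2) << 5)   /* 6 bits green */
                    |  (b >> 3));       /* 5 bits blue  */
}
```

A word-at-a-time variant is faster, but then the code must either guarantee alignment or be restricted to architectures that allow unaligned loads – which is exactly the exclusion list Qt 4.5.x got wrong for AVR32.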

PS: QtScript

I get a compile error in QtScript – some problem with pthread_getattr_np:

compiling ../3rdparty/javascriptcore/JavaScriptCore/runtime/Collector.cpp
../3rdparty/javascriptcore/JavaScriptCore/runtime/Collector.cpp: In function 'void* QTJSC::currentThreadStackBase()':
../3rdparty/javascriptcore/JavaScriptCore/runtime/Collector.cpp:682: error: 'pthread_getattr_np' was not declared in this scope

Interestingly, the code does take care of uClibc 0.9.xx:

#if defined(__UCLIBC__)
// versions of uClibc 0.9.28 and below do not have
// pthread_getattr_np or pthread_attr_getstack.
#if __UCLIBC_MAJOR__ == 0 && \
(__UCLIBC_MINOR__ < 9 || \
(__UCLIBC_MINOR__ == 9 && __UCLIBC_SUBLEVEL__ <= 30))
#include <stdio_ext.h>
extern int *__libc_stack_end;
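Rewritten as a plain C predicate (my own sketch, for clarity), the guard accepts every uClibc version up to and including 0.9.30 – note that the comment in the source says "0.9.28 and below" while the condition actually reaches up to 0.9.30:

```c
/* Sketch: the preprocessor condition above as a runtime predicate.
 * Returns nonzero for uClibc versions assumed to lack
 * pthread_getattr_np / pthread_attr_getstack. */
static int uclibc_lacks_getattr_np(int major, int minor, int sublevel)
{
    return major == 0
        && (minor < 9 || (minor == 9 && sublevel <= 30));
}
```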

Oops, there are two different copies of Collector.cpp:

One does define UCLIBC_USE_PROC_SELF_MAPS etc. as a workaround for uClibc 0.9.30, the other doesn't – another bug report to Nokia.

Just add -no-script to the configure options, for the moment.

Just a short note - I reported both problems as Qt bugs: QTBUG-6550 and QTBUG-6551.

Have fun with it …

Friday, September 18, 2009

Single to Dual/Multi Core with Windows XP (ACPI HAL)


Sometimes, people find that their Windows XP runs as a "single core" system, although two or more cores are installed. This can usually be seen in the task manager, where the performance tab shows only a single graph although "one graph per CPU" is selected.
I talk about cores here; the same applies to multiple processors. Those are usually found in servers, though, where few people would start "fixing" the HAL.

Hardware Abstraction Layer – HAL

The reason is usually that the plain ACPI HAL (Hardware Abstraction Layer) has been installed, which does not support more than one core. In theory, XP recognises when a new CPU is installed, re-checks, and installs the correct HAL. This does not always work, though.
The usual HALs are
  • Standard PC
  • ACPI Uni-Processor (UP)
  • ACPI Multi-Processor (MP)
  • MPS Multi-Processor (APIC)
Especially in the context of virtual machines running under Sun VirtualBox, this does not work. When a Windows XP system is installed with only one core active, and more cores are added later on, the CPU "id" does not change ... no redetection is done, and the other cores are not recognised. Another reason may be that "ACPI PC" is not the same as "ACPI UP", and only ACPI UP is able to support, and thus detect, more than one CPU. In the case of VirtualBox, the situation may also be caused by the IO-APIC not being enabled at install time. According to Sun, XP Uniprocessor with IO-APIC is slower than without.
Officially, there is no way to upgrade the HAL. You can downgrade it, though, which is not a good idea … there is no way back.

Using devcon.exe: Forced HAL Upgrade

I found a very interesting post about "devcon.exe", a tool provided by Microsoft as "the command line version of the device manager", and its application to the HAL switch problem.
The important stuff:
Just execute the following commands in a cmd.exe shell …
devcon sethwid @ROOT\ACPI_HAL\0000 := +acpiapic_mp !acpiapic_up
devcon update c:\windows\inf\hal.inf acpiapic_mp

Of course, this may completely break your system. Make backups, find out beforehand how to restore, etc. Using a virtual box system, this is easy, just make a snapshot and restore if the upgrade does not work.
After running devcon, reboot the system; after the reboot, XP re-detects all hardware and then requests another reboot. This may change some of the hardware names; in particular, the LAN connection may get the number 2.

Hidden Devices

You can use the show-hidden trick of the device manager to display the old devices and remove them:
set devmgr_show_nonpresent_devices=1
start devmgmt.msc

Switching the HAL

Once you have an ACPI MP HAL, you can switch it to whatever you like using the device manager. Just select "new driver" for the "Computer" device.

Doing it manually

Before I found the post by "hedrums" above, I tried to do it manually. Interestingly, none of the files is locked by Windows. They are loaded so early and completely during boot that they are no longer "needed" (used) in a running system and can simply be replaced. Yet, I never managed to copy and rename the files correctly.
Afterwards, by comparing the file sizes between the used versions (in system32) and the restore/backup versions (in ServicePackFiles\i386), I recognised the following:
• ntoskrnl.exe == ntkrnlmp.exe (2.04 MB)
• ntkrnlpa.exe == ntkrpamp.exe (1.93 MB)
• hal.dll == halmacpi.dll (131 kB)
So, by copying the three files ntkrnlmp.exe, ntkrpamp.exe and halmacpi.dll to system32 and renaming them to ntoskrnl.exe, ntkrnlpa.exe and hal.dll, respectively, you may end up running an ACPI MP HAL. I have not tried this, though.
Alternatively, using the correct ACPI UP files (whichever they are), you may get a UP system that, in turn, is automatically able to detect the multicore and again, install the correct drivers …


I have now found a tool that does the same thing, a bit more comfortably.