Strong Typing PODs

Identifying objects by a numerical integer is an almost trivial task. In DISPLACE, a very complex simulator I am involved with, there are many categories of objects that model the different kinds of agents in the simulation: vessels, marine locations, ships, ports, and so on.

Each object has an Id, and at the start of the project it was very natural to implement it using an int.

Problems started when, to shrink the memory footprint for optimization purposes, we decided to use smaller integer types instead.

Indeed, there could be at most 65535 nodes and at most 256 node types, yet each was modeled with a full int.

The mess started when we found that ids were getting mixed up: assigning a "Vessel Id" to a "Node Id" was very easy… with obvious, tragic effects.

So there were two problems here: the lack of type opacity, and the lack of strong typing.

Using PODs was natural, but PODs can be assigned to each other (they are weakly typed) and, since their storage is predefined, they cannot be changed easily (they aren't opaque).

If the PODs we're using do not require complex methods but must only represent an integer with some meaning, a concept, wrapping each of them in a hand-written class can be tedious. This is why I created the following header file:

template <typename T, typename TAG>
class strong_type {
public:
    using Base = T;
    using Tag = TAG;

    explicit strong_type (T v)
    : value (v) {}

    T get() const { return value; }

private:
    T value;   // the wrapped POD
};


The class is very simple: Base and Tag allow extracting the types, Base being the wrapped POD and Tag the structure implementing the tagging of the new type. I implemented get() to access the POD value explicitly, because we don't want the value to be obtained through an implicit cast.

Here's the test file (I'm a fanatic of TDD, of course):


#include <gtest/gtest.h>
#include <strongtype.h>
#include <type_traits>

TEST(StrongType, usage) {
    struct IdTag {};
    using ID = strong_type<int,IdTag>;

    ID i (10);

    struct ValueTag {};
    using Value = strong_type<ID::Base, ValueTag>;

    Value j (10);

    static_assert(!std::is_convertible<ID,Value>::value, "ID and Value are convertible!");

    ASSERT_EQ(i.get(), j.get());
}

Note the usage of static_assert to check that the types cannot be assigned or converted automatically by the compiler.
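The opacity pays off when the storage has to shrink, as in the DISPLACE case. Here is a standalone sketch (it carries its own minimal copy of the class, and the tag names NodeTag/VesselTag are made up for the example): changing the Base of an alias is the only edit needed, and no call site is touched.

```cpp
#include <cstdint>
#include <type_traits>

// Minimal standalone copy of strong_type, for illustration only.
template <typename T, typename TAG>
class strong_type {
public:
    using Base = T;
    using Tag = TAG;
    explicit strong_type(T v) : value(v) {}
    T get() const { return value; }
private:
    T value;
};

// Hypothetical tags: the real project defines its own.
struct NodeTag {};
struct VesselTag {};

// Shrinking the storage from int to std::uint16_t happens here,
// and only here: every function taking a NodeId is unaffected.
using NodeId   = strong_type<std::uint16_t, NodeTag>;
using VesselId = strong_type<std::uint16_t, VesselTag>;

// Same Base, different Tag: the compiler keeps them apart.
static_assert(!std::is_convertible<NodeId, VesselId>::value,
              "NodeId and VesselId must not convert into each other");

inline NodeId lookup_node() { return NodeId(42); }
```

Assigning a VesselId where a NodeId is expected now fails at compile time instead of silently corrupting an id.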

You can find the snippet here.

Happy Coding!


Permanent link to this article:

A C++ Smart Pointer wrapper for use with JNI


In this article, I'll propose a solution for managing native C++ resources from JNI using smart pointers.

While smart pointers can't be used from Java, because of the limits of Java memory management, the native library may still need to keep allocated resources in a shared_ptr or unique_ptr, for example because its classes derive from std::enable_shared_from_this.

Since there is a fixed pattern for keeping native objects in Java classes, a utility C++ class is proposed.

The problem of memory management when integrating Java and C++ code

Recently, I had to port a library I wrote for Linux to Android.

The library implements an OO interface to a Bluetooth Low Energy service, using a USB dongle (BlueGiga BLED112), and avoiding the use of both Bluez and DBus.

Bluez's Low Energy support was not ready for production code when we started this project, and I found the DBus interface unnecessarily complex for use with C/C++ applications. In any case, many features we needed were missing, so we switched to a hardware solution, and one option was the BlueGiga dongle.

I proposed to my customer that we open source the library, so perhaps one day we'll release our solution.

The library was entirely developed in C++11 under Linux on an ARM platform, with very few dependencies except for boost, needed for features like atomics and futures on ARM.

As any modern C++ program should, it makes large use of smart pointers, both shared and weak, so memory management is handled entirely automatically by the smart pointer logic.

Then one day my customer asked me to develop a version of this library to be used on Android.

I had two options. Since the dongle is managed through a serial port, I could rewrite the logic that implements the read/write to the serial port, the protocol, the parsing and construction of each packet, all the classes modelling the input and output endpoints (i.e. the Low Energy characteristics), and so on. I would also have had to rewrite all the tests, because no library comes without a set of supporting code to demonstrate the many different use case scenarios.

Or I could simply recompile the C++ library under Android and build a Java library that uses the native library through JNI code.

There are many problems I ran into with this task that I would like to write about, and if I ever find the time, I may well cover them here.

But one of the most annoying aspects of this task was the fact that Java and C++ (whether "classic" or "modern") have two completely incompatible memory management systems.

For those who don't see the difference, the situation is this: C++ is very precise about object lifecycles, either by letting the user decide through the "classic" allocation/deallocation system with new/delete calls, or by taking care of it with smart pointers. When a smart pointer exits its scope, the pointed-to object is destroyed if no other pointer shares it. So the object lifecycle is defined in a pretty predictable way.

Java, on the other hand, isn't so precise. Objects are created when they are instantiated, but they are destroyed whenever the JVM decides their time has come. No predictability here.

Every Java programmer knows that this poses a lot of problems even with the most common operations on common objects like files, sockets, and so on. If an object needs to be deinitialized before disposal, the user must manually call a proper operation. Java provides a `finalize()` method that's called when the system disposes of the object, but it is up to the JVM, or more precisely to the Garbage Collector (GC), to decide when this happens.

This is the first aspect of the problem. You have a C++ library that's designed to use smart pointers to manage the relations between the classes, you have a Java Library that uses it but requires the user to manage the allocation/deallocation of objects, and you have to glue them together.

Note also that in the native library, many objects are instantiated during the use of a feature, and some objects are owned by the user, so their lifecycle becomes somewhat independent from the object that created them.

For example, suppose we have an `adapter` object that creates a `protocol` object when needed, and this object can create one or many instances of the class `characteristic`. All of them are passed around through smart pointers. Adapter, protocol and characteristic can have different lifecycles, though it makes little sense for a characteristic to survive its adapter.

In C++ this poses little problem: the objects are kept alive as long as valid smart pointers to them exist, so either the child objects keep the parent alive, or their reference becomes invalid if they use a weak pointer. It is the user's responsibility to select the proper pointer type and the proper strategy.

In Java we have two things to take care of: the management of the native object references, and the management of their lifecycle.

Usually, if you have a native pointer to keep in a Java object, you store it in a Java long field, that is, a jlong in JNI terms.

The JNI code must also keep track of the object type, because if you allocate an object instance in JNI, this object must be kept in a jlong Java field, and it must also be retrieved and deleted at the proper time.

Another issue is that object ownership must be carefully managed: suppose you have a native class AN that has a relation to the class BN, and the Java classes AJ and BJ must own them. How do you manage their lifecycle in a safe way? If AJ exits its scope before BJ, then AN must be disposed of as well, but it must not dispose of BN until BJ is ready to be disposed.

If you are using smart pointers, life is perhaps easier, perhaps not. In any case, there's no such thing as a smart pointer in JNI terms: you can keep a native pointer in a jlong variable, but a std::shared_ptr is not a raw pointer, it is an object with several internal fields.

The pattern for storing native pointers in Java code is: allocate the pointer, cast it to jlong, and store it in some long Java field. To retrieve it, read the long Java field and cast it back to the original pointer type.

Additionally, when disposing of the Java object, you must retrieve the raw pointer from the long Java field, cast it back to the original type, and delete it.

The following functions do exactly this:

#include <jni.h>

jfieldID inline getHandleField(JNIEnv *env, jobject obj)
{
    jclass c = env->GetObjectClass(obj);
    // J is the type signature for long:
    return env->GetFieldID(c, "nativeHandle", "J");
}

template <typename T>
T *getHandle(JNIEnv *env, jobject obj)
{
    jlong handle = env->GetLongField(obj, getHandleField(env, obj));
    return reinterpret_cast<T *>(handle);
}

template <typename T>
void setHandle(JNIEnv *env, jobject obj, T *t)
{
    jlong handle = reinterpret_cast<jlong>(t);
    env->SetLongField(obj, getHandleField(env, obj), handle);
}

The code should be self-explanatory. The getHandleField() function simply retrieves the jfieldID value from the Java object passed as argument. The field has a fixed name, "nativeHandle" (an improvement would be to make it configurable).

getHandle and setHandle simply perform the necessary casts. They are templatized, so you can write


auto ptr = getHandle<MyObject>(env,object);


and you have your raw-pointer-to-object in ptr.
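The round-trip itself can be sketched without JNI at all; in the following standalone example (the names FakeJavaObject, MyObject and round_trip are made up for the illustration) a plain int64_t stands in for the jlong field of the Java object:

```cpp
#include <cstdint>

// Stand-in for the Java object: the int64_t field plays the role
// of the "nativeHandle" jlong field.
struct FakeJavaObject {
    std::int64_t nativeHandle = 0;
};

// Store: cast the raw pointer to the integer field.
template <typename T>
void set_handle(FakeJavaObject &obj, T *t) {
    obj.nativeHandle = reinterpret_cast<std::int64_t>(t);
}

// Retrieve: cast the integer field back to the original pointer type.
template <typename T>
T *get_handle(const FakeJavaObject &obj) {
    return reinterpret_cast<T *>(obj.nativeHandle);
}

struct MyObject { int payload = 7; };

inline int round_trip() {
    FakeJavaObject jobj;
    auto *ptr = new MyObject;                    // allocate on the native side
    set_handle(jobj, ptr);                       // store in the "Java" field
    int v = get_handle<MyObject>(jobj)->payload; // read it back later
    delete get_handle<MyObject>(jobj);           // dispose at the proper time
    return v;
}
```

The key point is that the type information lives only in the C++ code: the "Java side" just carries an opaque integer, which is why getting the template argument wrong is entirely the caller's responsibility.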

But what about smart pointers? If your object derives from std::enable_shared_from_this, you must keep it in a shared_ptr, otherwise shared_from_this() will fail with a bad_weak_ptr exception.

So what you need is to allocate a smart pointer on the heap through a new. Or better, wrap it in a class, preferably templatized:


#include <memory>
#include "handle.h"
#include "jnihelpers.h"

/** @brief A wrapper for smart pointers to be used in JNI code.
 *
 * **Usage**
 *
 * Instantiation:
 *   SmartPointerWrapper<Object> *obj = new SmartPointerWrapper<Object>(arguments);
 *   obj->instantiate(env, instance);
 *
 * Recovery:
 *   std::shared_ptr<Object> obj = SmartPointerWrapper<Object>::object(env, instance);
 * or
 *   SmartPointerWrapper<Object> *wrapper = SmartPointerWrapper<Object>::get(env, instance);
 *   std::shared_ptr<Object> obj = wrapper->get();
 *
 * Dispose:
 *   SmartPointerWrapper<Object> *wrapper = SmartPointerWrapper<Object>::get(env, instance);
 *   delete wrapper;
 * or simpler:
 *   SmartPointerWrapper<Object>::dispose(env, instance);
 */
template <typename T>
class SmartPointerWrapper {
    std::shared_ptr<T> mObject;

public:
    template <typename ...ARGS>
    explicit SmartPointerWrapper(ARGS... a) {
        mObject = std::make_shared<T>(a...);
    }

    explicit SmartPointerWrapper (std::shared_ptr<T> obj) {
        mObject = obj;
    }

    virtual ~SmartPointerWrapper() noexcept = default;

    void instantiate (JNIEnv *env, jobject instance) {
        setHandle<SmartPointerWrapper>(env, instance, this);
    }

    jlong instance() const {
        return reinterpret_cast<jlong>(this);
    }

    std::shared_ptr<T> get() const {
        return mObject;
    }

    static std::shared_ptr<T> object(JNIEnv *env, jobject instance) {
        return get(env, instance)->get();
    }

    static SmartPointerWrapper<T> *get(JNIEnv *env, jobject instance) {
        return getHandle<SmartPointerWrapper<T>>(env, instance);
    }

    static void dispose(JNIEnv *env, jobject instance) {
        auto obj = get(env, instance);
        delete obj;
        setHandle<SmartPointerWrapper>(env, instance, nullptr);
    }
};


Here it is. 

Use it this way: to instantiate, create the wrapper with new and call instantiate(env, instance):

SmartPointerWrapper<Object> *obj = new SmartPointerWrapper<Object>(arguments);
obj->instantiate(env, instance);

To recover the object's smart pointer, use get(), or the object() shortcut:

SmartPointerWrapper<Object> *wrapper = SmartPointerWrapper<Object>::get(env, instance);
std::shared_ptr<Object> obj = wrapper->get();

And to dispose/destroy:

SmartPointerWrapper<Object> *wrapper = SmartPointerWrapper<Object>::get(env, instance);
delete wrapper;

// or

SmartPointerWrapper<Object>::dispose(env, instance);

Of course, dispose must be called explicitly from Java (do not rely on calling it from finalize()).
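The mechanics can also be exercised outside JNI. The following standalone sketch (Widget, make_widget, widget_id and dispose_widget are made-up names, and a plain int64_t again stands in for the jlong field) shows why the wrapper matters: the object stays shared_ptr-managed, so shared_from_this() keeps working across the boundary.

```cpp
#include <cstdint>
#include <memory>

// Made-up payload class; it derives from enable_shared_from_this,
// so it must always live inside a shared_ptr.
struct Widget : std::enable_shared_from_this<Widget> {
    int id;
    explicit Widget(int i) : id(i) {}
};

// Simplified wrapper: the shared_ptr lives on the heap, and only the
// wrapper's address crosses the "JNI" boundary as an integer.
template <typename T>
struct Wrapper {
    std::shared_ptr<T> obj;
    explicit Wrapper(std::shared_ptr<T> p) : obj(std::move(p)) {}
};

inline std::int64_t make_widget(int id) {
    auto *w = new Wrapper<Widget>(std::make_shared<Widget>(id));
    return reinterpret_cast<std::int64_t>(w);   // what would go into the jlong
}

inline int widget_id(std::int64_t handle) {
    auto *w = reinterpret_cast<Wrapper<Widget> *>(handle);
    // shared_from_this() succeeds because the object is shared_ptr-managed.
    return w->obj->shared_from_this()->id;
}

inline void dispose_widget(std::int64_t handle) {
    delete reinterpret_cast<Wrapper<Widget> *>(handle);  // drops the refcount
}
```

If other native code still holds a shared_ptr to the Widget, dispose_widget only decrements the reference count, exactly the behavior we want when Java and native lifecycles diverge.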


Happy Coding!


VMware Player is slow under Ubuntu 16 host, Part 2

So it seems the issue I talked about some weeks ago wasn't fixed after all.

Windows is still slow, with the hard disk always working at 100% load. Not as slow as before, because the fix explained in my previous post seems to mitigate the problem, but still slow.

Since I had been using many virtual machines on that box without any problem for a long time, well before my upgrade to Ubuntu 16.04, I guessed the problem was the kernel. So I rolled back to 3.16.

And… tada! Now the machine works flawlessly. It seems incredible, because I looked around to see if anyone had similar issues, without any luck.

So, if you're running kernel 4.4 and VMware Player is slow: try rolling back to 3.16.



VMware Player is slow under Ubuntu 16 host

So you just upgraded your Ubuntu box and found that all your VMware Windows boxes are extremely slow?

The symptoms: your Windows Task Manager shows 100% usage of your disk, and the Windows instrumentation shows very long lags, say, up to 2.5 seconds (yes, seconds) to respond to a disk request. This makes the box practically unusable.

Perhaps your vmware logs are showing a lot of lines like

scsi0:0: Command READ(10) took 2.748 seconds (ok)
scsi0:0: Command WRITE(10) took 3.224 seconds (ok)

and so on…

Well, this is a well known issue. A quick search with Google will point you to the vblank_mode=0 fix.

Indeed, I had already applied that fix on my box; my /etc/profile file already exported it.

But for some (obscure) reason, it doesn't work anymore. Again, a (this time) longer search will point you to the correct answer: this post suggests checking another variable if you are using the closed source drivers for Nvidia: __GL_SYNC_TO_VBLANK=0.
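For reference, the two exports in /etc/profile end up looking like this (a sketch; the exact file and whether you need one or both lines depend on your driver setup):

```shell
# Mesa / open source drivers: disable vsync throttling
export vblank_mode=0

# NVIDIA closed source drivers honor their own variable instead
export __GL_SYNC_TO_VBLANK=0
```

Log out and back in (or source the file) so VMware Player picks the variables up.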

Indeed, this worked very well. I don't know why it changed; I suspect that the Ubuntu upgrader silently switched from the open source to the closed source drivers. Perhaps I tried them once and something remained when I switched back to the open source drivers. Really, I don't know; if someone has the answer, please share it.

Now my Windows boxes are fast enough to be used; next I have to deal with Windows Updates, but that is yet another story.



A compact CSV parser using C++ TMP

How many times have you implemented a CSV parser to fill a list of structs from a text file?

Personally, it happens to me on about every new project. Though it is a simple task that requires no more than half an hour to build and debug, it is boring, error prone and repetitive. The algorithm is quite trivial: read each line, split it into a vector of strings using some separator (almost always a comma, hence the "Comma Separated" name), use something to convert each field into a POD (int, double, whatever), and fill a struct.

Here's a general solution that makes use of template metaprogramming to implement a parser with strong type checking.

I'll leave the stream processing to you as an exercise; here we just take a string and parse it into a variadic tuple using boost::lexical_cast and some TMP.

#include <iostream>
#include <string>
#include <tuple>
#include <type_traits>
#include <vector>

#include <boost/lexical_cast.hpp>
#include <boost/algorithm/string.hpp>

// Base case: every element of the tuple has been filled.
template <std::size_t I = 0, typename Iterator, typename ...Ts>
inline typename std::enable_if< I == sizeof...(Ts), void>::type
parse( std::tuple<Ts...> &, Iterator)
{
}

// Recursive case: convert the current field to the I-th element type,
// then move on to the next field.
template <std::size_t I = 0, typename Iterator, typename ...Ts>
inline typename std::enable_if< I < sizeof...(Ts), void>::type
parse( std::tuple<Ts...> &tuple, Iterator it)
{
    std::get<I>(tuple) = boost::lexical_cast<typename std::tuple_element<I, std::tuple<Ts...> >::type >(*it);
    parse<I+1, Iterator, Ts...>(tuple, ++it);
}

int main()
{
    std::string sample = "first,second,3,4,5.01,sei";
    std::tuple<std::string,std::string,int,int,double,std::string> values;
    std::vector<std::string> fields;

    boost::split(fields, sample, boost::is_any_of(","));

    parse(values, fields.begin());

    // std::tuple has no stream operator, so print a single field to check:
    std::cout << std::get<4>(values) << std::endl;   // prints 5.01
}


My favourite use of this helper is within a reader loop that calls a functor, passed as an argument, which takes the converted tuple and makes the final use of it.

Something like this…


    // Sketch only: the real tuple has more fields, shortened here;
    // `reader` and `strm` are the CSV reader object and the input stream.
    using VesselDataType = std::tuple<std::string, int, double>;

    int i = 0;
    auto VesselLoaderFunc = [&i] (VesselDataType data) {
        std::cout << i++ << " Vessel: " << std::get<0>(data) << std::endl;
    };

    if (!reader.importFromStream<VesselDataType>(strm, "|", VesselLoaderFunc)) {
        std::cout << "Error loading vessels" << std::endl;
    }


Happy coding 🙂


The dbus-cpp Library, introduction

While working on a Bluetooth based embedded device, I needed to deal with the somewhat infamous DBus.

DBus and Bluez5 have a very steep learning curve, and if you don't have access to either the Qt framework or the glib framework, you have a problem indeed.

Glib is a very complex set of C libraries that mimics some of the built-in features of the C++ language, while Qt, on the other hand, is very vast and sometimes a big overkill. The stunning news is that there seem to be few options if you need a lightweight C++-only solution.

But it is not true. Somewhat well hidden in the creases of the web, I found an awesome alternative that is at the same time elegant, very well designed, modern and lightweight. The library is named DBUS-CPP.

Be warned: there are two other C++ libraries, one called dbus-c++ and another with the same name. But both are old and unmaintained. You could also try to use the low-level API through libdbus, but its documentation itself discourages using it. Before finding the actual solution, I spent some weeks trying to implement my code this way, and believe me, even the most obvious operations are long, complex, repetitive, boring and error prone.

The dbus-cpp library I want to introduce is hosted on Launchpad, at this url:

The library is written in pure C++11; it is mostly based on template metaprogramming, and for this reason its learning curve is not gentle. But once the basic ideas behind the library are clear, you can appreciate its elegance. Unfortunately there is no documentation and there are no tutorials, but the developers make great use of unit testing, so there is a lot of test code that can be studied. There are 3 or 4 examples, but they only show how to access a DBus service as a client, not how to implement DBus services.

This post is the first of a series. These articles are the result of my personal studies and their application to the embedded project. My idea is to fill the gap and give people some ideas on how to start. dbus-cpp is difficult to find on the web, but the efficiency that can be reached with it makes me think it deserves much more visibility.

I want to thank the development team leader, Thomas Voß, and his team, for the library and for the time he dedicated to answering my questions and clarifying my doubts. I hope these articles can be a significant contribution.

Later note: Linux distributions usually provide earlier versions of dbus-cpp. For example, Ubuntu 14.04 LTS provides dbus-cpp 2.0, while the latest version available from the sources is 5.0. I strongly suggest downloading the latest version from the source repository (they use bazaar) and compiling it. They use CMake as the building tool, so compiling and installing is very easy. Ubuntu 15.10, on the other hand, provides the latest version.



Pairing Bluetooth with no pin on Intel Edison

It may seem strange for an IoT enabled device, but there's no obvious way to pair with an Intel Edison through Bluetooth without accessing the shell.

I spent several hours doing something that with Bluez4 took at most 4 minutes, including the research on Google.

Here's what I did.

Edison provides Bluez5, which reworks most of the tools to use DBus. So to perform two quite frequent actions, like pairing a device without requiring user intervention on the device, and registering a new Bluetooth RFCOMM service, you need to interface with DBus, which isn't so obvious from C or C++.

The second action can be done if bluetoothd is started with legacy support, using the usual sdp service.

Regarding pairing, you must add an agent that responds with a set of fixed responses, like a fixed/predictable PIN, authorization confirmation, and so on.

The Bluez5 test-tools package provides a simple-agent python script that can be easily modified. It also provides the support script bluezutils.

You can find both on this link to GIST. Hope that helps.


Fixing problems with Ubuntu 13.04 and VmWare Tools

Installing Ubuntu on my VMware Fusion box has never been as difficult as it was these last weeks, when I needed to have it ready as soon as possible.

Though 12.04 LTS has proved to be a great choice when you need a rock solid Linux box, the latest Ubuntu releases are preferable, in my opinion, for development purposes.

Unfortunately, the switch to the latest, bleeding edge kernel hasn’t gone well for the people at VMware.

There are two very big problems with the default installation. I usually choose the default installation because you only need the Ubuntu ISO image and a few minutes, and a new, shining Linux box is ready to serve.

Not this time.

First, you’ll have problems with the resolution change. The very first thing you’ll do after the installation completes is log in, change the resolution to a higher one (1440×900 on my Mac) and… crash.

Your box will be stuck at the login screen, and you won’t be able to log in anymore.

Editing .config/xfce/xfconf/xfce-perchannel-xml/displays.xml will get you back to 800×600, but nothing more.

The solution is really simple: just increase the box memory size. By default the installer configures 1024MB of RAM and 1 core, even if you have more memory available and a multicore CPU.

Anyway, increasing the RAM (I set it to 4096MB) will allow you to change to any resolution.

Second, some features of the VMware Tools will be disabled, in particular the “shared folder” tool. This is because the VMware Tools installer is not compatible with kernel 3.8, so you’ll need a fix. Installing the VMware Tools will not work due to this incompatibility; the compilation will not even start, because the installer expects a version.h header that was removed in 3.8.

The solution: nacholito on the community has posted this article, where a script and the relevant patches can be downloaded. You’ll need to download the 3 files (I packaged them in a single zip archive for your convenience), unzip it in the same directory as your VMware Tools tar.gz, and execute the patch script.

Then use the installation script as usual.

Thanks again to nacholito for his post.


TouchWiz and the onOffsetChanged missing call

I am currently working on a Live Wallpaper project for a customer.

Interestingly, I found that though my app works perfectly on various devices (I have a Nexus 7, a Nexus 10, a Galaxy Tab P1010, various Chinese AllWinner devices, and a Note II phone), my wallpaper can’t scroll when the screen is swiped on some Samsung devices.

Usually to detect the swiping on a Live Wallpaper, you use the WallpaperService.Engine.onOffsetChanged, i.e. overriding this method on your engine class.

Unfortunately, the Samsung launcher app, TouchWiz, seems to have a bug, or maybe it’s an undocumented/non-compliant feature: the onOffsetChanged method is never reached, simply because they don’t call it (or maybe they don’t call setWallpaperOffsets, as explained in the Android docs).

So you need to use some workaround for this.

Here’s mine: override the onTouchEvent method and simulate the scrolling.

The scroll needs a value for the offset, which will be calculated from the swipe gesture; but unfortunately you won’t be able to get the exact position of the home screens, which means you won’t detect when the screen “bounces back” after a short swipe.

So your scrolling will be “desynchronized” from the screen. This may not be a problem, but you may prefer to use the correct method (onOffsetChanged) when available, and the “hack” only when the OS doesn’t provide the call.

But since you don’t have any way to know how the launcher app will behave, you need to start by assuming that onOffsetChanged is NOT available, and switch to the standard method if it turns out to work.

This is how:

First, set a mScrollingWorking boolean to false. You’ll need to do this every time the surface or view is created, because the user may change the launcher.

Then, whenever onOffsetChanged is called, set it to true.

Finally, override onTouchEvent and, if mScrollingWorking is false, simulate the scrolling.

This is my code:


	private boolean mScrollingWorking = false;
	private float tw_oldx, tw_fx, tw_sign;

	public void onTouchEvent(MotionEvent event,
			GLWallpaperService.GLEngine engine) {
		if (!mScrollingWorking) {
			if (event.getPointerCount() < 1)
				return;
			float xinc = 0.0f;
			float xstep = 0.333f;
			float x = screen_x;
			switch (event.getAction()) {
			case MotionEvent.ACTION_DOWN:
				tw_fx = event.getX(0);
				tw_oldx = tw_fx;
				break;
			case MotionEvent.ACTION_MOVE:
				xinc = (event.getX(0) - tw_oldx) / (float) screen_width;
				x = screen_x - xinc * xstep;
				tw_oldx = event.getX(0);
				break;
			case MotionEvent.ACTION_UP:
				tw_sign = Math.signum(event.getX(0) - tw_fx);
				// Round to the nearest page, as a working onOffsetChanged would do.
				if (tw_sign > 0.0) {
					x = (float) (Math.floor(screen_x / xstep) * xstep);
				} else {
					x = (float) (Math.ceil(screen_x / xstep) * xstep);
				}
				break;
			}

			// Keep the offset in the valid [0.0, 1.0] range.
			if (x < 0.0f)
				x = 0.0f;
			if (x > 1.0f)
				x = 1.0f;

			setOffset(x, 0.0f, xstep, 0.0f);
		}
	}

	public void onOffsetChanged(float x, float y, float xstep, float ystep, int xpo, int ypo) {
		mScrollingWorking = true;
		setOffset(x, y, xstep, ystep);
	}

	private void setOffset(float x, float y, float xstep, float ystep) {
		// Here you can scroll the background
		screen_x = x;
	}

Note that the code under the ACTION_UP label rounds the offset to the nearest page, as a working onOffsetChanged() call would do. But as said, you may still end up in a situation where the launcher bounces back to the starting page while your code scrolls to the next one. Sorry: blame Samsung, not me 🙂

Remember that on newer versions, you’ll need to call setOffsetNotificationsEnabled(true) to enable raw touch event delivery.


The original code had two problems. First, as hybridtv noted, some variables were missing and needed some clarification.

While the roles of tw_oldx and screen_width are trivial, screen_x needs to be explained: it stores the previous offset, and it is updated in the setOffset() function.

Second, there was a bug in the code. TouchWiz and our code need to be as synchronized as possible while moving, so when the finger is released from the screen (ACTION_UP) we need to round to the current screen. Also, the offset needs to be limited between 0.0 and 1.0.

Thanks to hybridtv for pointing this out.


Flashing an AllWinner device via LiveSuit from Windows 8

Flashing an AllWinner device using the LiveSuit tool from Windows 8 isn’t as obvious as it seems.

First, as usual with a new operating system, not all drivers work, and you must find the “right” version.

Second, as if the previous issue weren’t enough, on Windows 8 it isn’t possible to install an “unsigned” driver without some trickery.

An “unsigned” driver is a device driver that isn’t approved by Microsoft; on previous Windows versions, when installing an unsigned driver, Windows asked the user for approval. Today, Windows 8 doesn’t ask for approval and sometimes refuses the installation without saying anything: it silently closes the driver installation window.

So, first you need to force Windows to fall back to the “old” and polite behavior, gently asking the user whether to install while warning about potential damage. Then manually install the drivers and run LiveSuit.

To instruct Windows to accept unsigned drivers, you need to restart it in “troubleshooting” mode.

  1. Move to the right, bottom corner of the screen; a sidebar pops up: select the gear (“Settings” option).
  2. Select the “Change PC settings” option and go to the “General” tab.
  3. At the bottom of this page, you’ll see an “advanced startup” option: click on “restart now” and the PC will restart.
  4. At boot, a new menu will pop up: select “troubleshooting” and then option #7, “install unsigned drivers”.

The box will start normally, but you’ll have the option to tell the OS to install any unsigned driver.

Now you’ll have to manually install the LiveSuit drivers, because simply running the app will not work.

So power off your tablet, press the “volume up” button, and while pressing it insert the USB cable. Then press the power button 7-8 times. Windows should tell you that it needs some driver to be installed, but it will fail.

Then start the driver management window, select the non-working device, select “update drivers” and manually choose the LiveSuit directory. Windows will warn you that “installing unsigned drivers can be dangerous”. Accept the risk anyway and install the drivers.

After that, running LiveSuit should work as usual.

The LiveSuit version I used is “1.07-2011026”, and you can find it on the internet by searching Google for “LiveSuitPack_version_1.07_2011026.exe”.


Further links:

For a detailed (but not 100% exact) procedure to enable unsigned driver installation, see this post. Note that you must click on “Change PC Settings”, not on the “power” button.
