Elsewhere there are things that we all miss, yet it takes just one to notice...

Qt now works with the latest Android SDK/NDK

After a short time installing Qt 5.9.2, and without even crossing my fingers, I loaded up Qt and opened one of the Android-compatible 3D examples.

The first build complained about API level 1; I fixed that and ran it again. And voila! My phone ran it, albeit very slowly because it was that water demo.

Woo hoo! Might be back to Qt for desktop and mobile app dev again.

Java vs C++ for 3D mesh building

A few days ago, while playing around with LibGDX again, I got back into 3D programming and created a very simple demo of a load of spheres with the camera rotating around the scene.

On my laptop it ran just fine; generating the meshes for all the spheres took about a second.

It took bloody ages on my phone though, even though it’s a quad-core, 64-bit Android phone. Approximately 5 to 8 seconds. Ouch.

There’s none of this nonsense in C++, for starters. I know this from a very old project I was working on back in the day: a procedural map generator. More about that here: PROCMAP DBP LINK

So for the last couple of days (actually just the evenings, because of work), I’ve looked into creating a single sphere mesh in LibGDX and then duplicating it.
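Roughly the shape I’m aiming for is a sketch like the one below: build one sphere Model in create() on the GL thread and share it between ModelInstances. The class name, sizes, sphere count and material are placeholders, not code from the actual demo.

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;
import com.badlogic.gdx.graphics.g3d.Environment;
import com.badlogic.gdx.graphics.g3d.Material;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.utils.ModelBuilder;
import com.badlogic.gdx.math.MathUtils;
import com.badlogic.gdx.utils.Array;

public class SharedSphereSketch extends ApplicationAdapter {
    private ModelBatch modelBatch;
    private Model sphereModel;
    private final Array<ModelInstance> instances = new Array<ModelInstance>();
    private PerspectiveCamera camera;
    private Environment environment;

    @Override
    public void create() {
        modelBatch = new ModelBatch();

        camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        camera.position.set(0f, 5f, 20f);
        camera.lookAt(0f, 0f, 0f);
        camera.update();

        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1f));

        // Build ONE sphere mesh on the GL thread...
        ModelBuilder modelBuilder = new ModelBuilder();
        sphereModel = modelBuilder.createSphere(1f, 1f, 1f, 24, 24,
                new Material(ColorAttribute.createDiffuse(Color.WHITE)),
                Usage.Position | Usage.Normal);

        // ...and share it between every instance instead of generating a mesh per sphere.
        for (int i = 0; i < 100; i++) {
            ModelInstance instance = new ModelInstance(sphereModel);
            instance.transform.setToTranslation(MathUtils.random(-10f, 10f),
                    MathUtils.random(-10f, 10f), MathUtils.random(-10f, 10f));
            instances.add(instance);
        }
    }

    @Override
    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        modelBatch.begin(camera);
        modelBatch.render(instances, environment);
        modelBatch.end();
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        sphereModel.dispose();
    }
}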

A minor problem to start off with is that a lot of LibGDX’s mesh generators have been deprecated (I don’t know for how long now), and I can’t find examples of how to use the new stuff. They would have made things easier, but that minor problem got bigger.

It seems that ModelBuilder only runs on the GL thread, so I tried MeshBuilder instead. No luck there either after trying to hunt down info on Google (maybe Google is hiding it from me :P) about how to use the thing.

So I gave up and closed the project; I might go back to it if the next step fails.

I’m waiting on a huge update from Qt. Oh! It has just finished downloading and updating.

The previous versions of Qt didn’t work with the latest Android SDK/NDK. Fingers crossed with this one…

Next…

Android App for setting up devices

I started this last week, sometime before the hackathon, and finally got back onto it tonight. Most of my Android programming has been purely native C++, so setting up a UI is causing a bit of pain.

The first couple of screens are fine, and actually the third one is now too: I click something and it loads up a new Intent. The first screen is just buttons; the second screen fetches devices that are not set up and displays them.

When a device that hasn’t been set up is selected, the next screen pops up. I already know how to pass data from one activity to another, so that’s not the problem. The issue at the moment is the UI layout. The first attempt threw all the widgets to the top of the display when it was run. Then I realised I needed to set things up using layouts. So far so good: I’ve got all my inputs, some with buttons next to the text input field.
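For the record, the real screens will probably end up as XML layouts, but this hypothetical snippet (activity name, hint text and widget names all made up) shows the same idea built in code: a vertical layout holding a row with a text input and a button next to it.

import android.app.Activity;
import android.os.Bundle;
import android.widget.Button;
import android.widget.EditText;
import android.widget.LinearLayout;

public class LayoutSketchActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Vertical column for the whole screen.
        LinearLayout root = new LinearLayout(this);
        root.setOrientation(LinearLayout.VERTICAL);

        // One horizontal row: text input with a button next to it.
        LinearLayout row = new LinearLayout(this);
        row.setOrientation(LinearLayout.HORIZONTAL);

        EditText nameInput = new EditText(this);
        nameInput.setHint("Device name");
        // Weight of 1 lets the input stretch while the button keeps its natural size.
        row.addView(nameInput, new LinearLayout.LayoutParams(
                0, LinearLayout.LayoutParams.WRAP_CONTENT, 1f));

        Button setButton = new Button(this);
        setButton.setText("Set");
        row.addView(setButton);

        root.addView(row);
        setContentView(root);
    }
}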

Most of my previous UI work has been laying things out and then resizing everything to the display while accounting for the aspect ratio. This way of doing it is alien to me, but I’m getting used to it as I go along.

I know that on Android I can also handle portrait and landscape modes, which I haven’t sussed out yet, but I do know they use fragments. I’ll get to that soon enough.

The Android app is simple enough anyway, so there’s little need to fancy it up. It only needs to do a few jobs, and they all use a basic UI with no graphics (yet).

There’s no time tonight to do the communication between the app and the server, but essentially it will assign the device to an area and a category and give it a name.

The next step after this will be assigning the playlist, which, erm, I do want to use thumbnails for.

EDIT:

Using putExtra, I’ve sent the data I need to the new UI screen. I’ve done that before, so it was easy this time.
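For reference, the pattern is just putExtra on the sending side and getIntent().getStringExtra on the receiving side. The activity names and the extra key here are placeholders, not the ones from my app.

// DeviceListActivity.java (sending side)
import android.app.Activity;
import android.content.Intent;

public class DeviceListActivity extends Activity {
    private void openSetupScreen(String deviceId) {
        Intent intent = new Intent(this, DeviceSetupActivity.class);
        intent.putExtra("deviceId", deviceId);   // key name is made up
        startActivity(intent);
    }
}

// DeviceSetupActivity.java (receiving side)
import android.app.Activity;
import android.os.Bundle;

public class DeviceSetupActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        String deviceId = getIntent().getStringExtra("deviceId");
        // ...build the setup UI for this device...
    }
}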

Blender progress

I’ve got the mother-in-law down for a few days so programming after work is out because I can’t ‘zone’ with all the conversations going on.

This hasn’t stopped me from working with Blender to increase my skills with it though.

I can now easily import 3D models and animate them. Slamming in text and jiggling it around is very easy too. All in all, another good demonstration video produced in a little over an hour: fast-moving 3D animated text and models with a video-textured background. I’ve impressed myself this time. No stopping me now.

The server software and the Android app development will continue within the next two days when I can ‘zone’ again. There’s not much left to do on the media management software. Then there’s the device management through an android app. I’m also looking into SSL and other cryptography methods for secure communication all round.

Server media management

I’m glad to announce that my media server manager is very nearly complete. The files upload to their respective categories and the Java application handles the media library perfectly.

It took a bit of learning Swing in Java but that wasn’t hard at all.

There are a couple of things to do in the media manager before it’s complete: removing media and categories from the server, and adding descriptions to the media files. And that’s it.

The next step is the Android application to manage devices so that I can set them up on the go without the need to get back to my PC.

Unknown devices (those that are ready to be set up) will be stored in the root directory of the media server. The Android application will then assign it a category and a playlist.

All looking good now.

(I’m having a bit of difficulty with WordPress Code Snippets so I’ve been unable to add any snippets recently)

More progress towards my project

I’ve been doing odds and sods today.

  1. Play a list of videos on a continuous loop, full screen. Using libVLC and its Java wrapper I’ve finally got a full-screen player working, in fewer than 60 lines of code, which plays a list of video files.
  2. Write a server and test it over the internet. Using just a plain Java server, the initial test worked: accept a connection and send “Hello world!” back (a minimal sketch of that test follows this list). Now that it is running I can expand the server with all the functionality I require. This will be handling the back-end database, media and installations.
  3. Define a KISS (Keep It Simple Stupid) database for handling devices and media.
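The initial server test from point 2 really is as small as it sounds. A minimal sketch of it looks something like this (the port number is a placeholder):

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HelloServer {
    public static void main(String[] args) throws Exception {
        // Listen on a port (placeholder number) and greet every connection.
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                try (Socket client = server.accept()) {
                    OutputStream out = client.getOutputStream();
                    out.write("Hello world!\n".getBytes(StandardCharsets.UTF_8));
                    out.flush();
                }
            }
        }
    }
}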

To do:

  1. Write a mobile Android app to manage setting up devices and assigning playlists. This is going to be a big one and a lot of work, and I’ll need to be on site when setting up devices and getting them playing without glitches.
  2. Expand on the software for the media player so that it can update itself from the server with not only media updates, but also software updates.
  3. Run an outside test live over the internet and update the playlist.
  4. Get more experience with Blender and video creation. For the most part, I’m more than capable of producing everything I need in approximately an hour per video. The more familiar I get with Blender, the more effects I can add to the videos, which will be a bonus.
  5. Bully-test the server. I already have someone on hand who can test the server’s integrity and stability. Pen-testing it will give me some good pointers on how to make it more secure. I’m also considering the two-way login.

That’s the plan for the next couple of weeks. As long as I get some free time, I can move along quite quickly with this, with the exception of the bully testing and updating.

Right now, things are looking almost bulletproof. Fingers crossed.

EDIT:

I forgot to add the VPS (Virtual Private Server) to the list.

Variable names are important

For the last two days I’ve been working on decoding video frames and displaying them. The idea is to run against the system clock, which I’ve had working every time so far. This time, though, I decided to make a general decoder so that I didn’t have to worry about FFmpeg’s decoded video format, which up until now has always been YUV420 that I converted on the fly with a GLSL shader. Now everything gets converted to an RGBA bitmap so I don’t have to worry about it.

Anyway, variable names…

I buffer up the frames and then test them against the current system clock for a frame that’s ready to be presented. I had two variables, frame and frame_temp, which are pretty innocent names. Unfortunately though, and I kept missing this whilst going through the code, I was grabbing a pointer in frame_temp and testing it. If its time fell below the current system time then it would be assigned to frame, and if there was a previous frame then that would be cleared from the buffer queue and added to the unused buffer queue.
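To put that in more concrete terms, here’s a loose, Java-flavoured reconstruction of what that selection step is meant to do. The real code is C++ around FFmpeg, and DecodedFrame plus the queue names are placeholders.

import java.util.ArrayDeque;

public class FramePicker {
    static class DecodedFrame {
        long presentationTimeMs;   // when this frame should be shown
        // pixel data elided
    }

    private final ArrayDeque<DecodedFrame> bufferQueue = new ArrayDeque<DecodedFrame>();
    private final ArrayDeque<DecodedFrame> unusedQueue = new ArrayDeque<DecodedFrame>();
    private DecodedFrame frame;    // the frame currently being presented; null at start-up

    DecodedFrame pickFrameForNow() {
        DecodedFrame frameTemp = bufferQueue.peek();   // candidate frame, may be null

        // Nothing buffered, or the candidate isn't due yet: exit with whatever we had
        // before (which is null on the very first pass).
        if (frameTemp == null || frameTemp.presentationTimeMs > System.currentTimeMillis()) {
            return frame;
        }

        // The candidate is due: recycle the previously presented frame into the
        // unused pool, then promote the candidate out of the buffer queue.
        if (frame != null) {
            unusedQueue.add(frame);
        }
        bufferQueue.remove(frameTemp);
        frame = frameTemp;
        return frame;
    }
}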

Now this is what I missed.

frame_temp is used first, frame is uninitialised (actually null), the tests are done on frame_temp, and if they fail it just exits with a null pointer. Fair enough.

It was a “typo”.

Hidden among four lines of code using frame_temp, I had used the variable frame instead (which is null to start with). When I accessed a method on that object, it crashed.

The error message was very obscure, and nothing on Google or anywhere else was any help.

My mistake…

Most of the time I rely on debug log output, and whilst running this code everything looked like it was running just as it should. It’s hard to follow multiple threads in debug output.

Finally, I ran it through gdb which took me straight to the line of code where it crashed. The line right after the “typo”.

Lesson learnt…

It’s been a while

Yes, most definitely it has been some time since I last posted anything on my blog. That’s because this last month I have hardly had any real chill time to myself. Well, I have had some, but it never lasts for long enough.

Anyway, I’ve still been faffing about. I recently purchased CopperCube so that I could delve deeper into WebGL. Plus I’m familiar with Irrlicht, and CopperLicht makes working with 3D graphics a doddle.

I’ve also found that I can actually call myself a full-stack developer, considering the recent projects I’ve been working on. One project involves a C/C++ TCP/UDP server for communications, a Java TCP server that uses Java AWT graphics, and GWT for a web interface instead of installed software. The server does many things, too many to mention, but I’m well chuffed with it.

Setting up the server involves a fresh Linux install, then installing Apache Tomcat, making a few modifications to the system, and then installing the software.

Devices connect to the network via Ethernet or Wi-Fi and automatically detect the location of the server thanks to the UDP heartbeat. Clever stuff really. Probably not, but all the same, it works great.
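As a rough idea of the device side (not the actual protocol; the port and the message handling are assumptions), it boils down to listening for the server’s UDP heartbeat and taking the sender’s address as the server location:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class HeartbeatListener {
    public static void main(String[] args) throws Exception {
        // Port number is a placeholder for whatever the heartbeat is broadcast on.
        try (DatagramSocket socket = new DatagramSocket(40000)) {
            byte[] buffer = new byte[256];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);   // blocks until a heartbeat arrives
            InetAddress serverAddress = packet.getAddress();
            System.out.println("Server found at " + serverAddress.getHostAddress());
        }
    }
}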

I’ve also played around with having a home TCP server that I can make use of from my mobile phone while I’m out, from my tablet or from a PC, as well as through a GWT web interface.

After playing around with all of this, there’s lots more I want to play with. Maybe Vaadin or something similar, or moving back to desktop 3D graphics again instead of OpenGL ES 2.

I’ve also finally got myself an i7 laptop with Nvidia graphics. Cheap off eBay! So I can develop on the go. That’s if the battery in it is any good.

There are other things over this last month that have tickled my fancy, but I won’t mention them here. Tempting though.

Hopefully this weekend will be a long one, as I’m booking Friday and Monday off; I seriously need to relax a little.

Until next time…

Good and bad for embedded and Android

Embedded Linux is the bee’s knees for doing a lot of things, especially when you really don’t want the Android overhead on top of it. For a device such as the Odroid range, which can be used as a headless server providing an IPTV gateway and a web interface for controls, Linux is the better option.

When it comes to graphical applications, for example decoding video streams and OpenGL rendering, it always becomes a toss-up between Linux and Android. A lot of people who are familiar with Java will automatically pick Java, and I have to say, I do actually like the Java Virtual Machine. The JVM is handy whilst developing because a crashed application just gets cleared out.

Qt can also be used for developing mobile applications and has a very extensive framework, and because it’s C++, time-critical code isn’t a problem.

The only issue I have with Qt is that when the day comes that I do want to release an application, whatever it is I finally do, packaging it up for all OSes becomes a pain.

Qt for Android is easy: just build the project and get the APK. It’s just one file and it can be copied to any Android device.

Qt for Windows is a nuisance because you have to faff about with the terminal (which is awful in Windows) to package all the required libraries, and then it’s not guaranteed to run straight out of the box. On Windows you also have to use a dependency walker to find the missing libraries.

Qt for Linux isn’t too bad unless you want the latest libraries. At the moment you can install Qt 5.5 from the Linux repositories and away you go. Unfortunately, the latest version of Qt is 5.7, which has some very nice additions, so it’s back to packaging stuff up.

For embedded Linux it would have to be Qt from the Linux repositories.

For Android it’s still a toss-up between Qt and Android Studio + NDK, although if I could get away with it I would use Qt and Linux, unless it was going to be an application for the Android market.

LibGdx TextureAtlas (non .pack file)

Okay, I hit a block when I was starting to think about writing a quick networked multiplayer shoot ’em up game in LibGDX.

LibGDX texture atlases are very handy things because you can store a load of smaller images in one image. This also optimises the GPU draw routines, because the GPU doesn’t have to keep swapping textures for each sprite drawn.

I found pre-made, free-to-use sprites from Kenney.nl; below is a section of what they look like:

[spritesheet image]

Unfortunately though, LibGDX uses .pack files for its texture atlases, and this sprite sheet comes with an XML file that looks like this:

<TextureAtlas imagePath="sprites.png">
 <SubTexture name="spaceAstronauts_001.png" x="998" y="847" width="34" height="44"/>
 <SubTexture name="spaceAstronauts_002.png" x="919" y="142" width="37" height="44"/>
 <SubTexture name="spaceAstronauts_003.png" x="824" y="0" width="50" height="44"/>

...
</TextureAtlas>

Which is completely different to a .pack file.

So without further ado, I decided to test out an XML parser to retrieve all the sub-regions from the XML file.

Test code as follows:

[snippet id=”19″]

So, armed with this now, thanks mainly to my work with GWT, I can quickly move on to creating my own class to generate the texture atlas in LibGDX.
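Since the snippet embed is misbehaving (see the WordPress note earlier on this page), here’s a rough sketch of the approach rather than the actual test code. The class name is made up; the LibGDX pieces it leans on are XmlReader and TextureAtlas.addRegion, and it assumes the image referenced by imagePath sits next to the XML file.

import com.badlogic.gdx.files.FileHandle;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.utils.XmlReader;

public class XmlAtlasLoader {
    // Builds a TextureAtlas from a Kenney-style XML sheet instead of a .pack file.
    public static TextureAtlas load(FileHandle xmlFile) {
        XmlReader.Element root = new XmlReader().parse(xmlFile);

        // The imagePath attribute points at the sheet, relative to the XML file.
        Texture texture = new Texture(xmlFile.parent().child(root.getAttribute("imagePath")));

        TextureAtlas atlas = new TextureAtlas();
        for (XmlReader.Element region : root.getChildrenByName("SubTexture")) {
            atlas.addRegion(region.getAttribute("name"), texture,
                    region.getIntAttribute("x"), region.getIntAttribute("y"),
                    region.getIntAttribute("width"), region.getIntAttribute("height"));
        }
        return atlas;
    }
}

Usage would be something like XmlAtlasLoader.load(Gdx.files.internal("sprites.xml")), then grabbing regions with atlas.findRegion("spaceAstronauts_001.png").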
