TlsException on Mono

on Friday, August 15, 2014

For reference: if you’re using Mono, making a connection with a client certificate via HttpWebRequest and friends, and you get an exception, it’s because you don’t have any trusted root certificates in the Mono certificate store (even if you are bypassing server certificate validation with the old trick of a validation handler that returns true). You can fix this by running

sudo mozroots --import --machine --sync

This will download the trusted root certificates from the Mozilla LXR web site and import them into Mono’s machine trust store in an automated way. See man mozroots for other options.

This is the exception you’re likely to get if you don’t have a machine trust store:

Unhandled Exception:
System.Net.WebException: Error getting response stream (Write: The authentication or decryption has failed.): SendFailure ---> System.IO.IOException: The authentication or decryption has failed. ---> Mono.Security.Protocol.Tls.TlsException: Invalid certificate received from server. Error code: 0xffffffff800b010a

at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.RemoteValidation (Mono.Security.Protocol.Tls.ClientContext context, AlertDescription description) [0x00000] in <filename unknown>:0 
at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.validateCertificates (Mono.Security.X509.X509CertificateCollection certificates) [0x00000] in <filename unknown>:0 
at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.ProcessAsTls1 () [0x00000] in <filename unknown>:0 
at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.ProcessAsSsl3 () [0x00000] in <filename unknown>:0 
at Mono.Security.Protocol.Tls.Handshake.HandshakeMessage.Process () [0x00000] in <filename unknown>:0 
at (wrapper remoting-invoke-with-check) Mono.Security.Protocol.Tls.Handshake.HandshakeMessage:Process ()
at Mono.Security.Protocol.Tls.ClientRecordProtocol.ProcessHandshakeMessage (Mono.Security.Protocol.Tls.TlsStream handMsg) [0x00000] in <filename unknown>:0 
at Mono.Security.Protocol.Tls.RecordProtocol.InternalReceiveRecordCallback (IAsyncResult asyncResult) [0x00000] in <filename unknown>:0 
--- End of inner exception stack trace ---

at Mono.Security.Protocol.Tls.SslStreamBase.AsyncHandshakeCallback (IAsyncResult asyncResult) [0x00000] in <filename unknown>:0 
--- End of inner exception stack trace ---

at System.Net.HttpWebRequest.EndGetResponse (IAsyncResult asyncResult) [0x00000] in <filename unknown>:0 
at System.Net.HttpWebRequest.GetResponse () [0x00000] in <filename unknown>:0 
at EntryPoint.Main () [0x00000] in <filename unknown>:0 

Cross-compiling C++11 without going mad(der)

on Friday, May 23, 2014

C++11 is all the rage these days. It’s got a ton of new features, language- and compiler-wise, aimed at fixing the many problems that have constantly plagued C++ over the years. It’s not a perfect language, not by a long shot: it’s ridiculously verbose when following best practices, and it’s, well, C++. I’m sure I’ll get flamed.

C++11 support

There’s one particular aspect of C++ that really appeals to me: it is the language that everyone* forgets is actually natively supported on the widest range of desktop and mobile platforms out there - OSX, Linux, Windows, Android, iOS, Raspberry Pi, the newer consoles, etc. And for the lazy programmer (that’s me) who wants to work on one platform and target all of the others without having to recode (much), C++ cross-compilation is tempting.

Now, there are various compilers out there with varying degrees of C++11 support. The one that annoys me the most is VisualStudio, since it’s been the slowest to catch up, and I would love to work in it. Alas, the only decent version that can actually compile most of the useful C++11 is VS2013, and I can’t even trust it to support proper defaulted constructors, initializer lists or move semantics. Aaaargh! Oh well, Windows is not a good platform for cross-compiling anyway, so let’s ignore VisualStudio and target Windows with GCC instead. Both Clang and GCC are considered done in terms of compiler features, and between them they cover the entire gamut of platforms I want to target (OSX, iOS and PS4 with clang; Linux, Windows and Android with GCC).

C++11 support is not only about the compiler features, but also about the standard library implementations. Clang ships libc++, GCC ships libstdc++. As of clang 3.4, libc++ is pretty much complete (although I hit a few bugs that I found fixed in libcxx/trunk). With gcc 4.8, libstdc++ is mostly complete.

To test compatibility and make sure that my C++11 code, built primarily with clang 3.4 on OSX, would compile and run on the other platforms, my first objective was to grab the libcxx tests, strip out the libcxxisms (like __has_feature() and LIBCPP defines) and cross-compile them to run on ios (simulator), android, windows and linux. There are a lot of tests and I wanted to go at it in stages, so I first tackled the threading tests, and also wrote some small threaded apps to further check how good the support is for std::future, std::thread, std::async and std::packaged_task. The results were very encouraging: all the code compiled and ran with no issues on osx, win (gcc and mingw x64), android (gcc armv7 and x86), ios (clang armv7 and sim/x86) and linux (gcc x64). Due to the limitations of the architecture, std::packaged_task and other features requiring atomics aren’t supported on armv6, so I decided to skip that arch.

I ran the algorithms tests, and found that the libstdc++ shipped with gcc 4.8 doesn’t implement rotate. I get the feeling it’s not in 4.9 either, but I haven’t checked yet. Everything else from that suite built and ran fine.

In the atomics tests, things did not go quite so well. It looks like gcc doesn’t support initializing an atomic through std::atomic_init:

std::atomic_bool obj(true);
std::atomic_init(&obj, false);

Clang results vs GCC results

That lack of support breaks a bunch of tests. There are other broken tests in that suite too; another piece that gcc fails to compile is:

#include <atomic>

template <class T>
void test() {
    typedef std::atomic<T> athing;
    athing a;
}

struct A {
    int i;
    explicit A(int d = 0) : i(d) {}
};

int main() {
    test<A>();
}

It fails because the constructor of std::atomic is marked noexcept, but the test defines a struct A with an explicit constructor that defaults to no arguments (int d = 0) and isn’t marked noexcept. GCC complains that the exception specifications don’t match and fails to compile. Clang has no problem with it. This is one of those cases where I’m really not sure which compiler is right (one works, one doesn’t; I’m tempted to say clang is more correct, but…).

There’s a bunch of other failures that I haven’t investigated yet, and a ton of tests I haven’t run; it’s something that’s going to take a while. Still, this exercise has led me to believe that cross-compiling c++11 code is very viable, and by not relying on VS, I can use most features with no issues.

So how do I cross-compile?

I’m doing this on OSX (Mountain Lion) so I can target the widest range of platforms. Ideally, we’d use one compiler frontend and just switch the libraries around, but unfortunately this is not an ideal world, and using separate toolchains for every platform is safer and much easier.


If I can, I prefer to keep all toolchains in a directory called toolchains (obvious naming is obvious). Inside, android toolchains go into android/, windows into windows/, etc, etc. iOS toolchains are served from the system so you can symlink them in or just use the original paths.


Android

Android is pretty simple. Download the Android r9d NDK and dump it somewhere. I don’t use ndk-build directly if I can avoid it; I prefer to build native libraries with the standalone toolchain and later integrate them into apps using the include $(PREBUILT_SHARED_LIBRARY) mechanism for linking prebuilt libraries. To create a standalone native android toolchain, you just run

$ANDROID_NDK_PATH/build/tools/ --platform=android-19 --install-dir=toolchains/android/arm --arch=arm --toolchain=arm-linux-androideabi-4.8

This will create an arm toolchain (suitable for all arm archs). For x86, use

--arch=x86 --toolchain=x86-4.8

There’s also a mips version:

--arch=mips --toolchain=mipsel-linux-android-4.8

Here’s an example command line for compiling with the arm toolchain:

$ANDROIDARM/bin/arm-linux-androideabi-g++  -Wall -std=c++11 -fno-rtti -g -O0 --sysroot=$ANDROIDARM/sysroot -march=armv7-a -MMD -MP -MF -fpic -ffunction-sections -funwind-tables -fstack-protector -mfpu=vfpv3-d16 -mfloat-abi=softfp -mthumb -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -pthread -DNDEBUG -c test.cpp -o test.o

And the corresponding link step:

$ANDROIDARM/bin/arm-linux-androideabi-g++ --sysroot=$ANDROIDARM/sysroot -no-canonical-prefixes -march=armv7-a -pthread -Wl,--fix-cortex-a8  -Wl,--no-undefined -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -lstdc++ -lm  test.o -o test

You can run native apps built like this directly on an android device by copying them with adb to /data/local/tmp, which is nice for quick tests and automation that doesn’t require interaction with the Java runtime. For actual real android apps, you build an .so that you then either load from the Java side with System.loadLibrary, load dynamically from a native activity (via dlopen; don’t forget to set the library path) or link statically to. Android is getting good at exposing the system natively, so at this point there’s practically no need for any Java code if the app does its own UI (like, say, a game).


iOS

Again, pretty simple stuff. You’ll need the SDK, of course, and for C++11 on OSX and iOS you really need a recent clang, so Xcode 5 is required. You’ll be building for two targets, arm and the simulator (which is i386).

In toolchains/ios, do the following:

ln -s /Applications/ sim

ln -s /Applications/ arm

Note the version of the SDK in the path. Depending on what you have, you’ll need to adjust it.

Here’s an example command line for compiling for the simulator:

g++ -arch i386 -Wall -g -O0 -std=c++11 -stdlib=libc++ -fno-rtti --sysroot=$IOSSIM -D_XOPEN_SOURCE=1 -DTARGET_IPHONE_SIMULATOR -mios-simulator-version-min=5.0 -c test.cpp -o test.o

And linking:

g++ -arch i386   --sysroot=$IOSSIM -Wl,-syslibroot,$IOSSIM -stdlib=libc++ -mios-simulator-version-min=5.0 test.o -o test

And for iOS arm:

g++ -arch armv7 -Wall -g -O0 -std=c++11 -stdlib=libc++ -fno-rtti --sysroot=$IOSARM -DHAVE_ARMV6=1 -DZ_PREFIX -DPLATFORM_IPHONE -DARM_FPU_VFP=1 -miphoneos-version-min=5.0 -mno-thumb -fvisibility=hidden -c test.cpp -o test.o

g++ -arch armv7  --sysroot=$IOSARM -Wl,-syslibroot,$IOSARM -stdlib=libc++ -miphoneos-version-min=5.0 test.o -o test

To use C++11, the minimum ios version is 5.0, so both command lines set that as a requirement.

$IOSSIM and $IOSARM point to the sim and arm symlinks created earlier. This uses the system’s compiler, and only the location of the target-specific libraries needs to be specified (via sysroot).

The reason I’m dumping these command lines is so that it’s easier to see the parallels and automate them (in my case, via a small makefile).


Windows

Ah, Windows. Always the problematic little OS with the wonderful tools and the horrible build systems. Windows, of course, is not easy. Fortunately, it’s not that hard either, because there’s something called MXE that makes it a breeze to set up.

MXE (M cross environment) is a Makefile that compiles a cross compiler and cross compiles many free libraries such as SDL and Qt, providing a nice cross-compiling environment for various target platforms. It uses mingw32 and mingw64 and supports a ton of libraries. It’s pretty awesome.

Building the cross-compiler

Check out mxe

git clone -b stable

The GCC version it’s set to build by default has a bug, so we need to change it to a newer one.

Edit src/ and change the following lines to:

$(PKG)_VERSION  := 4.8.2
$(PKG)_CHECKSUM := 810fb70bd721e1d9f446b6503afe0a9088b62986

Build a first version of the compiler and tools:

make MXE_TARGETS='x86_64-w64-mingw32' gcc gmp mpfr winpthreads lua -j4 JOBS=4

By default, gcc is configured to use win32 threads, which kills support for threading and other c++11 features, so after the first build of gcc is done, we’re going to build it again using pthreads.

Edit src/ again and do the following:

On line 12, add winpthreads to the end of the $(PKG)_DEPS list so it looks like

$(PKG)_DEPS := mingwrt w32api mingw-w64 binutils gcc-gmp gcc-mpc gcc-mpfr winpthreads

On line 49, change --enable-threads=win32 to --enable-threads=posix

GCC isn’t configured to support sysroot by default, and it’s really handy to have when you’re cross-compiling, so we’re going to enable that.

Edit src/ and add --with-sysroot to the list of flags, around line 38

Build gcc again

make MXE_TARGETS='x86_64-w64-mingw32' winpthreads gcc -j4 JOBS=4

Et voilà, a cross-compiler. You can now symlink the mxe/usr directory into toolchains/windows/x64 and use it like the android compiler.

Here’s an example command line for compiling for windows:

$WINDOWS64/bin/x86_64-w64-mingw32-g++ -Wall -g -O0 -std=c++11 -fno-rtti -pthread --sysroot=$WINDOWS64  -c test.cpp -o test.o

And linking:

$WINDOWS64/bin/x86_64-w64-mingw32-g++ --sysroot=$WINDOWS64 -lstdc++ -pthread test.o -o test.exe


Linux

Linux is GCC, of course, and there are prebuilt cross-compilers for it. They’re annoying in that binutils wasn’t built with --enable-sysroot, which breaks my routine here a bit, but oh well.

You can find OSX-Linux cross-compilers for x86 and x64 at the crossgcc.rts site. They come in the form of dmg files, which I really don’t understand: it’s a compiler, not an app, and I need to set path flags for it anyway, so I want to put it in a place of my choosing, not in the system. Anyway, the dmg installs into /usr/local, so you can just copy the files out from there after it’s done installing, or symlink them into your toolchains directory, like

ln -s /usr/local/gcc-4.8.1-for-linux64 toolchains/linux/x64

Here’s an example command line for compiling for linux:

$LINUX64/bin/x86_64-pc-linux-g++ -Wall -g -O0 -std=c++11 -fno-rtti -pthread --sysroot=$LINUX64 -c test.cpp -o test.o

And linking:

$LINUX64/bin/x86_64-pc-linux-g++ -L$LINUX64 -L$LINUX64/lib64 -static-libstdc++ -pthread test.o -o test

Note the lack of --sysroot during linking. Also note the -static-libstdc++ flag, which ensures that libstdc++ is linked statically, so that your app actually runs on whatever linux system you’re going to try it on. libstdc++ changes often, and dynamic linking binds a specific version that may or may not be found on the target system, with amusing results.


Phew! If you got this far, congratulations. This mostly served as a dumping ground for stuff I don’t want to forget, so your mileage may vary. Now go and do some cross-compilation, I’ve got a game engine to write.

* not everyone

Codebits 2014 - 3 days of fun

on Monday, April 14, 2014

Wherein I spend three days demo'ing the Oculus Rift, hacking on a portable VR rig with a Raspberry Pi, riding RiftCycles, and mobilizing the entire medical emergency and firemen staff on call due to an extremely nuclear chili experience (rumours of my demise were greatly exaggerated).

This year our usual group occupied the usual couple of tables at Codebits and split up into three projects: Pew Pew Pew!, an attempt at building a portable VR experience of a First Person Shooter with an Oculus Rift, a Kinect and a Raspberry Pi; Wolf of Codebits, a stock exchange built on top of the Meo Wallet infrastructure using the "money" that was distributed for testing to everyone at Codebits; and Nelo, the winner of the event's top prize, a knee lock for polio patients to replace the metal harness that they traditionally have to use, built with free and open technology like Arduino, Bitalino sensors and 3D printing, and based on the idea of a Chinese finger trap.

It was awesome fun, as it usually is, even though I spent a lot of time cursing at SD cards, and the Pew Pew Pew! project, which I did with Bruno Rodrigues, didn't end up fulfilling all its goals. Portability was the primary goal - getting a Raspberry Pi connected to the Oculus Rift, both feeding off a portable USB battery, so that the whole thing could be stuffed in pockets and the user could have freedom of movement without worrying that he might drag a laptop with him if he turned too much or moved too far.
Bruno killing some critters with the Raspberry and the Oculus control module in his pockets
It turns out that the Oculus sucks so little power that the USB batteries we had would turn off because they thought they weren't in use... So instead of using two batteries - one for the Raspi and one for the Oculus - we used one for both, so that the Raspi would ensure that the battery would not turn off.

We managed to get the whole thing portable and Quake compiled on the Raspberry before the SD card troubles started and killed off the remainder of our schedule; we ended up spending most of the time replacing cards, reinstalling Raspbian and trying to get things up and running again. We did manage to do a presentation in the end to show off the concept, Bruno going up on stage, pockets stuffed with cables and boxes, to show off the rig fully portable and running. So now you can guess what I'm going to be working on for the next few days ;)

Congratulations are in order to everyone at the organization for putting together another amazing event, and to everyone that managed to pull together a project while being constantly distracted by all the awesome stuff going on around them! And a special congrats to the Nelo team for pulling off such an amazing idea and stealing the show! Now I wish I were in Portugal more often to play with the Bee 3D printer that they won :-P

Update: A lot of other things happened at Codebits, to wit: RiftCycles, the Nuclear Chili experience, talks and workshops, Presentation Karaoke (where you have no idea what the next slide is going to have), the Amazing Quiz Show (wherein we learn what 2002::/32 is), Retrocomputing (where a bunch of people have fun with old consoles and computers, including my ZX Spectrum), and so much more!

Formatting git patches for partially transplanting a repository

on Friday, March 08, 2013

So I wanted to move a subdirectory inside a git repository into its own repo, keeping all the history of the changes from the original repository in the new one. With git, copying partial history around is as easy as using git format-patch --root -o directory/to/put/patches/in -- path/to/subdirectory, which will create a numbered patch file for every commit that touched the subdirectory in question. Applying all the patches in the new repository is just a question of doing git am -3 *.patch.

The problem is, format-patch skips merge commits, which means that there might be missing changes in the patches, which sorta makes things not work.

The alternative way is then to do git log --pretty=email, which outputs a commit in the same format and actually handles merge commits properly. But, of course, I need to do that for every commit that I want to export (and there's a bunch), and I hate doing things by hand.

To that effect, here's a few lines that do the job properly, exporting a list of commit hashes in the proper order and then going through them one by one and exporting each one to a directory, numbered appropriately so they're correctly sorted:

Export the list of interesting commits in the correct order (older to newer)

git log --oneline --reverse -- path/to/subdirectory | cut -d' ' -f1 > ../patches/list

Create a patch file for each commit on the list

c=`wc -l ../patches/list | cut -d' ' -f6`
for j in $(eval echo {1..$c}); do
    n=`printf "%04d\n" $j`
    a=`head -n $j ../patches/list | tail -1`
    git log -p --pretty=email --stat -m --first-parent $a~1..$a -- path/to/subdirectory > ../patches/new/$n-$a.patch
done

Apply all the patches in the new git repository

git am -3 -p3 ../patches/new/*.patch

The subdirectory I'm taking things out of is 3 levels deep in the original repository, and I don't want to keep the parent directories, so I'm passing -p3 to have git (and the patching process) remove them when applying.

If git am fails to apply a patch, it's very likely that the patch is a merge commit with changes that are already applied. I can check this by doing patch -p3 < .git/rebase-apply/##, where ## is the failed patch number reported by git am. patch will either apply the change or report that a change has already been applied (and do I want to revert it? just say no). If any changes needed applying with patch, I can then add the changed files with git add and do git am --resolved, which will create the commit and continue applying the patches. If there are no changes to be applied, I can just skip it with git am --skip (which is most likely to happen) and continue.

Gnome Developer Experience Hackfest 2013

on Monday, February 11, 2013

The Aftermath

After finally getting rid of a really bad cold, here I am reporting about the DevX hackfest that took place right before FOSDEM, at the Betagroup Coworking Space, a very nice coworking place in Brussels with excellent access and great facilities. The hackfest, organized by Alberto Ruiz (thanks!) and sponsored by the Gnome Foundation, had the goal of improving the application developer experience on the desktop, and lasted for three days, with plenty of discussions on a variety of topics, from tooling and IDEs, documentation, languages, libraries to bundling, distribution and sandboxing (and more).

It was a pretty interesting experience; there were a lot of people participating, and due to the nature of the facilities (i.e., we were all in one room together), a lot of discussions bounced around the room and spilled over from group to group. My goal for the hackfest was to work on (and hopefully finish) the tooling required to create Mono bindings for the Gnome desktop in an automated way, so that packagers and developers can make bindings available for any library that supports gobject-introspection. By the end of the hackfest, the bindings tool (called bindinator) was able to bind Webkit with no user intervention, and with gstreamer bindings 95% done (two bugs still pending), things are looking good for automated C# bindings.

Between hacking and sneezing, we discussed tooling and IDEs, particularly what an IDE should have in terms of features, and what features a language should have to better support an application development environment; i.e., in the case of dynamic languages, a built-in AST is a very good thing to have, since you really want good code completion in your IDE, especially when you're starting on a new platform and aren't comfortable with the available libraries and APIs. Other useful features that went on the list for an IDE would be syntax highlighting (a must on any good code editor), responsive UI, good build infrastructure integration (preferably hiding away the specific build tool details and possibly with its own project format that's independent of specific build tools (looking at you autotools)), debugger support, modularization (for user extensibility). And, preferably, being built with the same tools and languages that are recommended for the platform (dogfooding++).

The language discussion was really *the* topic that dominated the three days. There was a lot of back and forth over the merits and demerits of Python, Javascript, Vala, C and C# throughout the days and into the evening activities. Which language would be the easiest to integrate? What tools are available for each? Debuggers are important and hard to do, code completion is harder in some languages than others; if one were to code an IDE from scratch, what language would be better for the UI, the logic, systems integration? Would floating point fuzziness affect someone doing an accounting app? What type of developers are the target, and what type of apps? Widgets and applets that just create a quick UI over existing libraries? Bigger apps? How many developers are there for every language, and how many things are missing and/or need to be fixed in Vala, or Javascript, or any other language? Should there be a single language recommendation? Two languages? All these and more were put forth and discussed extensively, and will probably continue to be discussed over time (as there is rarely a right answer for most of them). No matter how people feel about the decisions that came out of this hackfest, they can be assured that they weren't taken lightly, or without a fight.

All in all, it was a great hackfest, three days of very productive discussions and hacking. Kudos to Alberto Ruiz for a great job organizing everyone, and thank you to the Gnome Foundation and Andrea Veri for the sponsorship and assistance.

Boston, a hackfest

on Friday, June 29, 2012
The Mono & Gnome Festival of Love 2012 is in full swing here in Boston, thanks to the wonderfully stubborn David Nielsen, who got everyone together, got us a great room to work in at the Microsoft NERD Center, and secured sponsorship from Fluendo, Xamarin, GNOME and PluralSight.

Day 2 of the hackfest has just finished, and it was quite an eventful day. After a slow start yesterday (particularly for me, as I managed to kill OSX so thoroughly that it wouldn't boot and required a full restore (all hail up-to-date Time Machine backups)), today was a pretty interesting day.

Highlights of the day include a loooong conversation with the gobject-introspection people, determining exactly how broken gir is and how that affects our C# binding generation, loud complaining saved for posterity in the webapp we're using to track our tasks (and found to be very neat and useful), watching Google IO in style thanks to the wonderful resources provided by the Microsoft NERD Center (really, their facilities are top notch), generally discussing geeky stuff and the road forward for Mono & Gnome, and having a late dinner and lots of margaritas at the Border Café (which is still one of my favourite places in Boston, naysayers be damned).

All in all, a great hackfest, and we're just getting started!

This post brought to you thanks to our generous sponsors:

Looking back, going forward

on Wednesday, May 30, 2012
May 11, a sunny day in my little corner of the world, was my last day at Xamarin. I've spent an amazing 9 months working on Mono for Android, but more than that, Xamarin was a continuation of my work in the Mono team that started in 2006 back at Novell. So, in a sense, this is an end of a cycle.

These past 6 years have been life-changing; I dove into professional open source development head first, worked with an amazing team, met a ton of great people, and learned and did so many things that sometimes it's hard to believe it's only been 6 years. Some projects were successful, some not so much, but nothing was ever routine or mundane. Moonlight was a particularly amazing experience, working with C#, C/C++ and JS inside a browser with bridges and refcounting and all sorts of crazy hacks to build a UI toolkit from scratch, and Mono for Android was an inspiring challenge that taught me more about mobile development than I thought possible. Impossible is not a word that the Mono team uses much ;-)

A lot of people have been assuming that, since I'm leaving Xamarin, I'm going to leave Mono development altogether. Rest assured, that's not going to happen. :-) There's a lot of projects I want to support in the Mono world, and the Linux/Mono community definitely needs a bit of a pick-me-up, which is why I'll be taking part in the Mono & Gnome Hackfest that's going to happen in Boston June 26 to July 2.

In the meantime, I'll be taking a bit of a break to recharge batteries and get ready for the new challenges ahead. It's going to be an interesting year! :-D