Wim's Space

About things I solved or am trying to solve.

Double precision issues

Thu, 04/26/2018 - 17:13

Due to some interaction between a Swagger-related project and SimpleJSON, a simple yet fast and easy to use C# JSON library I used in the code generated by my Swagger parser and code generator, I stumbled upon some issues regarding storing integers as doubles. The SimpleJSON library removed the task of generating huge amounts of model classes to deserialize REST call results into.

SimpleJSON uses double as internal storage, and that is fine as long as integer numbers are 32 bits. The trouble and mind-blowing issues start when you have to use 64-bit integers. In my case the Int64 numbers are sometimes used as IDs of things to fetch, so they have to be exact. In the following text I ignore the unsigned integral numbers, but they exhibit the same issue.

A simple examination shows both int64 and double are 8 byte data structures, so where’s the problem?

Double.MaxValue   (1.7976931348623157E+308)

is way larger than

Int64.MaxValue   (9223372036854775807 or 9.223372036854775807E+18)

But problems arise in the proximity of Int64.MaxValue, to be precise.

The first coding attempt was to use the following in C#:

Double d = Double.Parse(Int64.MaxValue.ToString());

At first glance it returns a strange and incorrect value of 9.223372036854776E18, which is almost the correct value of 9223372036854775807; very close, and yet the last digits are wrong.

Given the byte-wise size of 8 for a Double this is understandable: it reserves 52 bits for the fraction, 11 bits for the exponent and 1 bit for the sign (see IEEE Standard 754 for floating point numbers).

An Int64, in comparison, has a 63-bit integral part and 1 sign bit. So it can never fit with full precision into the Double's fraction. It's not the byte size of the Double that is the limit, but its precision, which is lower because the Double also contains an exponent part.

Doing the same with an Int64, e.g. loading a number too big to represent, like:

Int64.Parse("9223372036854775808")

throws a nice out-of-range error (an OverflowException).

The cause in this case is clear: the input is larger than the type's MaxValue. When using a Double, Int64.MaxValue is still magnitudes smaller than Double.MaxValue, therefore not triggering the same out-of-range error.

Trying to go safer with:

Double.TryParse(Int64.MaxValue.ToString(), out Double d)

returned true (i.e. no problem during conversion) and the same slightly-off value. Expected was false, as the conversion is not flawless.

Even stranger is trying to convert the Double d outcome to a string using:

Double.TryParse(Int64.MaxValue.ToString(), out Double d); d.ToString("F0")

returned "9223372036854780000" instead of the expected value 9223372036854775807. Now it's a whopping 5 digits off track.
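To make the loss visible in code, here is a minimal sketch of my own (not from the original post) that parses the same text twice and shows why keeping integral JSON numbers in an Int64 is the safer route:

using System;

class DoublePrecisionDemo
{
    static void Main()
    {
        string text = Int64.MaxValue.ToString();      // "9223372036854775807"

        // Going through Double silently loses the last digits.
        double viaDouble = Double.Parse(text);
        Console.WriteLine(viaDouble.ToString("F0"));  // rounded, no longer the exact ID

        // A simple round-trip check detects the loss.
        bool lossless = viaDouble.ToString("F0") == text;
        Console.WriteLine(lossless);                  // False

        // Keeping the value in Int64 preserves it exactly.
        long viaLong = Int64.Parse(text);
        Console.WriteLine(viaLong);                   // 9223372036854775807
    }
}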

These issues might occur whenever data is stored as text and not as binary values, because in formats like JSON there is often no way to determine whether a value is an integer or a floating point number:

5

might be a Byte, Int16, Int32 or Int64, but also a Float or a Double.

5.0

on the other hand is clearly a floating point number, so a Float or a Double. As Double is the larger of the two, it's the safest choice: the value will fit.
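A hedged sketch of how a parser could act on that distinction (the helper name and the decision rule are my own, not SimpleJSON's): treat a token without '.', 'e' or 'E' as integral and try Int64 first, falling back to Double only when that fails.

// Hypothetical helper, not part of SimpleJSON: classify a JSON number token
// and keep 64-bit IDs exact by preferring Int64 over Double.
static object ParseJsonNumber(string token)
{
    bool looksIntegral = token.IndexOfAny(new[] { '.', 'e', 'E' }) < 0;

    if (looksIntegral && Int64.TryParse(token, out long integral))
    {
        return integral;   // exact, even for values near Int64.MaxValue
    }

    // Accept the precision loss knowingly for real floating point values.
    return Double.Parse(token, System.Globalization.CultureInfo.InvariantCulture);
}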

Even a blunt bit-by-bit copy (just use the Double's 8 bytes as storage) will probably fail, as a Double has some bit patterns that signal special numbers like +/- Infinity and NaN or 'Not a Number' and might trigger exceptions. Both of these special numbers have their exponent part filled with all 1's (see IEEE Standard 754 for floating point numbers).
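A small sketch (my own illustration, not from the post) of why such a blind bit copy is risky: some Int64 bit patterns reinterpret as exactly those special values.

// Exponent bits all 1 with a zero fraction give +Infinity;
// exponent bits all 1 with a non-zero fraction give NaN.
long infinityBits = 0x7FF0000000000000;
long nanBits      = 0x7FF8000000000000;

Console.WriteLine(Double.IsInfinity(BitConverter.Int64BitsToDouble(infinityBits))); // True
Console.WriteLine(Double.IsNaN(BitConverter.Int64BitsToDouble(nanBits)));           // True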

As can be seen above, taking a Double is most of the time (but not always) a safe choice.

So is this all a C# problem? Far from it!

In Java:

System.out.println(Double.parseDouble("" + Long.MAX_VALUE));
System.out.println("" + Long.MAX_VALUE);

returned 9.223372036854776E18 instead of the correct value of 9223372036854775807 (so 4 digits wrong, due to some rounding it seems).

In 64-bit Python 3.6:

import sys

float(sys.maxsize)
int(sys.maxsize)

float(sys.maxsize) returns 9.223372036854776e+18 too, instead of the correct value 9223372036854775807 that int(sys.maxsize) gives (so, like Java, 4 digits off).

PS: an unrelated issue is that in C/C++, parsing strings into numbers with functions like strtof() usually stops at the first character that is not understood. strtof(), for example, also returns (through its second argument) a pointer to the character where parsing stopped. So in case of a wrong decimal separator you might end up with only the integral part (so 5 instead of 5.5235).

WHS 2011 Client Backup Drive Full

Mon, 12/21/2015 - 09:53

This week I had that dreaded message for the second time. Probably due to modern multi-gigabyte games that patch themselves regularly and to entering the Windows Insider program (so a new Windows 10 build every now and then).

The last time (quite desperate) I deleted one of the recent Data.4096.nn.dat files and did a repair. It worked, but I just lost a lot of backups. So I wanted to avoid that at all cost.

What happens when the client backup drive starts filling up is that beyond a certain point the weekly cleanup task will at most adjust the indexes and not shrink the actual cluster storage files (Data.4096.nn.dat and Data.512.nn.dat). So even if you mark backups to be deleted at the next cleanup, it still does not free up disk space. If the disk fills up even further, even the adjustment of the cluster indexes stops after a few backed-up machines.

Yesterday I found a much simpler and better approach (and one that is not destructive to start with)!

First and very important is not to make things worse, so do not forget to stop both backup services so no backups are added during this operation.

It turned out that using the built-in compression feature of NTFS (which happened to be enabled on my client backup drive, so probably by default) could free up the gigabytes I needed to get things working again. After compressing around 64 of the smallest Data.4096.nn.dat files my free space went up from 4GB to 25GB (around 6GB more than the largest file on the disk).

As my client backup drive is 2TB, I was quite happy that I did not have to compress all files.

After that it was a matter of marking old backups as 'to be deleted at next cleanup' and running the cleanup job. After the cleanup it's best to revert the compression so you can do the trick again if needed.

You can apply the compress attribute by selecting a number of files, right-clicking them for the property dialog and using the Advanced button there. Compressing takes a while, so grab some coffee or, better, lunch in the meantime.

For command-line lovers, the command to look for is called compact.

Removing the compression is just a simple compact /u * command from within the Client Computer Backup directory located in ServerFolders on the Client Backup Drive.


Reindexing WHS 2011's DLNA Server

Sun, 07/19/2015 - 16:16

Searching for this subject reveals a couple of links, each of which has some issues. So I started combining code and testing it so I would be able to rebuild the index without restarting the server.

The reason for rebuilding is that the indexing seems to work on directory notifications and also indexes files that are moved out of the indexed folders. If that happens, one will start seeing drive letters in the DLNA file lists.

This proved somewhat more difficult. The database, called 'CurrentDatabase_372.wmdb', is located under the profile of a special user called 'MediaStreamingAdmin'. The processes related to the DLNA server also run under this account. Basically these are the services whsmss (Windows Server Media Streaming and HomeGroup Service) and WMPNetworkSvc (Windows Media Player Network Sharing Service).

Both services need to be stopped before an attempt to delete the database can be made.

This however still fails when the server has run for a while and midnight has passed. The reason is that two other processes are started under the same account and accessing the same database (most solutions ignore this and ask to reboot the server before triggering a reindex).

These two processes are WMPAxHost.exe and WMPlayer.exe. WMPAxHost seems to control the WMPlayer.exe process and restarts it when terminated. The purpose of these two processes seems to be updating the metadata of the media files with internet-based metadata. This is probably the same feature WMPlayer offers when started interactively.

Terminating these processes is not a problem, as they are restarted the next midnight. It's of course obvious that WMPAxHost has to be terminated before WMPlayer. To kill these processes some force has to be applied (hence the /f switches of the taskkill statements).

During the search for a solution I also came across a way to disable and enable Media Sharing with PowerShell commands. I have not tested whether the batch file runs without these two lines, as I find it more elegant to disable the Media Sharing feature during the modification.

The last trick used is to rename the database file (which Windows NT and later allow even on open files). So even if the final delete of the renamed database in the script fails, the next reboot will create a new database.

The complete script looks like:

cd /d c:\program files\windows server\bin

wsspowershell.exe set-wssmediaserverenabled "-enable 0"

net stop whsmss
net stop WMPNetworkSvc

taskkill /f /im wmpaxhost.exe
taskkill /f /im wmplayer.exe

ren "C:\Users\MediaStreamingAdmin\AppData\Local\Microsoft\Media Player\CurrentDatabase_372.wmdb" *.old

net start WMPNetworkSvc
net start whsmss

wsspowershell.exe set-wssmediaserverenabled "-enable 1"

del "C:\Users\MediaStreamingAdmin\AppData\Local\Microsoft\Media Player\CurrentDatabase_372.old"


Using a Denver AC-5000W with Windows (or OS X)

Sat, 10/18/2014 - 17:51

Some weeks ago, the Denver AC-5000W action cameras were for sale for around € 50. So a lot cheaper than a GoPro and thus nice for testing. As it comes with an underwater housing capable of withstanding water pressure down to 40m of depth, it's useful for our scuba diving hobby without spending too much (one could always buy a GoPro later).

But this blog post is not about scuba diving or GoPro versus Denver, but about getting the stuff out of the camera (preferably by Wi-Fi without opening the case). The camera supports Wi-Fi by advertising itself as a Wi-Fi hotspot with a security key ‘1234567890’.

As either the Wi-Fi connection/feature or the mobile iOS/Android software is unstable (I was not able to download all photos with either of them, and Android was way better at it than iOS), I wanted to know how to get the photos and videos off the camera using a PC.

First I thought of disassembling the Android APK file, but it proved a bit hard to download this file on a PC (I needed to enter my username/password and device code in a piece of unknown software). But it was not necessary to do this at all.

Then I just tried to connect with a browser to the gateway address (192.168.1.1) of the hotspot the Denver advertises (without luck). Normally these types of devices tend to expose an embedded webserver (like the average Wi-Fi router).

Pinging this IP address, however, worked.

To be able to see a bit more of what goes on, I started with telnet (to see if a connection was possible at all). I tried 'telnet 192.168.1.1 http', so a webserver, again without luck. The second try was way better: 'telnet 192.168.1.1 ftp' gave me a nice welcome message and a prompt for a username.

Next was finding the username and password for this embedded ftp server. First I tried ‘admin’ and as password ‘1234567890’ assuming the programmers did not want to make it that hard. No luck.

Then, with a little luck, I tried good old 'root' as username and once again '1234567890' as password, and to my surprise I was in.

The camera shows a simple camera-like SD card layout: a root folder DCIM with subdirectories for photos and videos and an additional one for events (no clue yet what that's for; maybe it's used for the feature to look at the live camera picture with a mobile device).

So the directory structure is simply:

DCIM
  100EVENT
  100IMAGE
  100VIDEO

With a decent FTP client like FileZilla it’s very easy to transfer all photos and videos to a Windows PC or Apple Mac.

So just put the camera in Wi-Fi mode, connect to the ‘DENVER AC-5000W’ hotspot using ‘1234567890’ as security key.

Then setup a ftp connection to ‘192.168.1.1’ with a normal plain text username and password (‘root’ and ‘1234567890’) and start transferring your photos and videos.

Transfers run smoothest if you set your FTP client software to a single (one) transfer at a time.


WP8 LongListSelector and not correctly updating ContextMenu’s

Wed, 08/13/2014 - 14:06

These last days I have been working on a simple WP8 app that uses TvDb.com to keep track of the next/upcoming series episode to watch.

I made extensive use of the LongListSelector combined with a ContextMenu from the WP8 Toolkit found at CodePlex. I want to be able to short tap (navigate) and long tap (context menu). The DataContext supplied is a Dictionary, hence the Key, Value and KeyValuePair stuff present in the code.

For the XAML I used code like this to make sure my C# code would be able to know which episode to mark as watched when a user long taps a list item (note: I removed all non-essential attributes):

<phone:PivotItem Header="upcoming">
    <phone:LongListSelector ItemsSource="{Binding UpcomingEpisodes}" >
        <phone:LongListSelector.ItemTemplate>
            <DataTemplate>
                <StackPanel Tag="{Binding Value.Id}" Tap="Upcoming_Tap">
                    <toolkit:ContextMenuService.ContextMenu>
                        <toolkit:ContextMenu DataContext="{Binding Value.Id}" >
                            <toolkit:MenuItem Header="mark as watched" Click="UpcomingWatched_Click"/>
                        </toolkit:ContextMenu>
                    </toolkit:ContextMenuService.ContextMenu>
                    <TextBlock Text="{Binding Key.SeriesName}" />
                    <StackPanel Orientation="Horizontal">
                        <TextBlock Text="{Binding Value.EpisodeAndSeason}" />
                        <TextBlock Text="{Binding Value.EpisodeName}" />
                    </StackPanel>
                </StackPanel>
            </DataTemplate>
        </phone:LongListSelector.ItemTemplate>
    </phone:LongListSelector>


The C# code is quite simple:

a) For the short tap I use:

private void Upcoming_Tap(object sender, System.Windows.Input.GestureEventArgs e)
{
    if (sender is FrameworkElement && (sender as FrameworkElement).Tag != null)
    {
        Int32 id = Int32.Parse((sender as FrameworkElement).Tag.ToString());

        // etc
    }
}

 

b) For the long tap I use:

private void UpcomingWatched_Click(object sender, RoutedEventArgs e)
{
    if (sender is FrameworkElement && (sender as FrameworkElement).DataContext != null)
    {
        KeyValuePair<Serie, Episode> dc = (KeyValuePair<Serie, Episode>)((sender as FrameworkElement).DataContext);

        //etc
    }
}

note: My DataContext is a KeyValuePair so I need to do some typecasting here.

The problem is that after marking a couple of episodes as watched, the DataContext of the ContextMenu is not updated correctly anymore and I keep marking things as watched that I do not see in my LongListSelector.

After using Google for two days and finding a 'complex' workaround I did not get working (at the links 'we-secretly-have-changed' and 'dlaa'), I stumbled across an article at CodeProject that led to the solution (I did not get the CodeProject code to work in my project, but searching for it at MSDN did).

I modified my code a tiny bit at three places.

a) I added an

x:Name="UpcomingItem"

attribute to the topmost StackPanel element that defines a LongListSelector item.

b) I changed the binding of the ContextMenu from

{Binding Value.Id}

into

{Binding ElementName=UpcomingItem},

effectively binding the ContextMenu to its parent StackPanel named UpcomingItem (so NOT to its DataContext anymore).

<phone:PivotItem Header="upcoming">
    <phone:LongListSelector ItemsSource="{Binding UpcomingEpisodes}" >
        <phone:LongListSelector.ItemTemplate>
            <DataTemplate>
                <StackPanel Tag="{Binding Value.Id}" Tap="Upcoming_Tap" x:Name="UpcomingItem">
                    <toolkit:ContextMenuService.ContextMenu>
                        <toolkit:ContextMenu DataContext="{Binding ElementName=UpcomingItem}" >
                            <toolkit:MenuItem Header="mark as watched" Click="UpcomingWatched_Click"/>
                        </toolkit:ContextMenu>
                    </toolkit:ContextMenuService.ContextMenu>
                    <TextBlock Text="{Binding Key.SeriesName}" />
                    <StackPanel Orientation="Horizontal">
                        <TextBlock Text="{Binding Value.EpisodeAndSeason}" />
                        <TextBlock Text="{Binding Value.EpisodeName}" />
                    </StackPanel>
                </StackPanel>
            </DataTemplate>
        </phone:LongListSelector.ItemTemplate>
    </phone:LongListSelector>

note: the phone:PivotItem has nothing to do with the problem described in this post.

c) Finally, in the C# code I had to modify the retrieval of the DataContext dc variable, as (sender as FrameworkElement).DataContext is now a StackPanel object instead of the KeyValuePair of the original code.

So I changed the DataContext cast in the C# long-tap handler above to read:

StackPanel sp = (StackPanel)(sender as FrameworkElement).DataContext;
KeyValuePair<Serie, Episode> dc = (KeyValuePair<Serie, Episode>)(sp.DataContext);

Finally the ContextMenu nicely works on the LongListSelector Item when long tapped, even when the underlying DataSource is updated.


Debugging PHP

Mon, 11/12/2012 - 11:35

As an old fashioned programmer I grew up with debugging methods like post-mortem traces and trace statements.

Today, however, we have and are used to GUIs for debugging and can single-step code or even recompile code and retry the operation. This is all nice in environments where applications can be frozen. If not, as with web pages and applications depending on real-time communication with devices, single stepping alone ruins the application's workings and thus the debugging process.

Here old-fashioned trace messages and a viewer for them come in handy again. Normally on Microsoft Windows one uses the OutputDebugString() API. For PHP this API call was missing, so I implemented a simple PHP extension that wraps the API in two ways. One is just the plain call and the other is as a member function of an object.

A big advantage of the OutputDebugString() API is that if there is no viewer active, the output is just ignored and vanishes into thin air, leaving no traces like massive log files. Also good to know is that it's impossible to ruin HTTP headers etc., as the output is redirected to something other than the web browser.

The extension was written in Borland Delphi using the easy to use Php4Delphi library. As viewer one can use the free DbgView from Sysinternals.

The result is a very easy to use extension that can be left in the code for as long as one wants/needs.

The following snippet tests whether the module is indeed loaded properly by the PHP interpreter:

$module = "log";

if (!extension_loaded($module)) {
    echo "Log Module not Loaded";
    exit;
}

 

This snippet uses the php_log class:

$log = new php_log();
$log->cleardebugwindow();
$log->outputdebugstring("PHP test log class", $log->info);

 

The cleardebugwindow() method sends a special message to DbgView clearing the display. Outputdebugstring() takes two parameters, the message and a severity string. This last parameter is handy for grouping the messages or to be able to search on certain types. It is not necessary to use the built-in types like info, warning or error; any tag is allowed.

The following code is not using classes:

outputdebugstring("PHP test module", 'error');

 

The sources can be downloaded from this link. In order to compile it, you'll also need to download Php4Delphi and configure it correctly for your PHP version, and of course a Borland Delphi version.


How to automate inclusion of versioning info in Java beans

Wed, 08/15/2012 - 15:00

This post is about how to solve a problem that bugged me for a while.

When developing Portlets for Liferay one always wonders what exact version is actually running on the various servers of the development chain (local/test/integration/production) and what sources it was compiled from.

  • The first solution (when still using CVS/SVN):

When using CVS or SVN this can easily be solved by using keyword expansion and having either of these systems update the MANIFEST.MF file containing the keyword placeholders.

For SVN, by adding keywords to the MANIFEST.MF file and simply adding a couple of lines to the build.xml file that touch the MANIFEST.MF file (so it gets marked for check-in every time), the requested information can be included pretty much automatically.

<?xml version="1.0"?>
<!DOCTYPE project>

<project name="my-portlet" basedir="." default="deploy">
    <import file="../build-common-portlet.xml" />
    <tstamp>
        <format property="TimeDate.Now" pattern="yyyy-MM-dd HH:mm:ss" />
    </tstamp>
    <manifest file="docroot/META-INF/MANIFEST.MF" mode="update">
        <attribute name="Ant-Build-Stamp" value="${TimeDate.Now}" />
    </manifest>
</project>

In the MANIFEST.MF file one needs to include the following placeholders:

Svn-Revision: $Revision$
Svn-Author: $Author$
Svn-Date: $Date$

Finally, in Eclipse one needs to add the keywords to the MANIFEST.MF file (this feature is hidden in the MANIFEST.MF context menu under Team|Set Property…). In the resulting dialog choose 'svn:keywords' and enter 'Revision Date Author' on a single line.

CVS always expands the keywords if present so only the Ant script is needed.

Getting hold of this information and making use of it is similar to the Mercurial solution presented next.

  • The second solution (after switching to Mercurial):

This method adds Mercurial information to the MANIFEST.MF; unfortunately this is immediately outdated the moment one does a commit/push (because a new revision is then created).

BUT when the portlet is built for deployment, the information in the MANIFEST.MF is updated to the correct values prior to compiling. The deployment build itself does not commit any files, so no new revision is created. So when deployed, one can easily find out what exact source code was used and compiled.

The first part is to prep the build.xml file so it retrieves and writes/updates this information into the MANIFEST.MF file.

A plain Liferay generated build.xml file looks like:

<?xml version="1.0"?>
<project name="my-portlet" basedir="." default="deploy">
    <import file="../build-common-portlet.xml" />

</project>

By inserting/including the following XML just after the import task, we retrieve the wanted information and write/update it into the MANIFEST.MF file. This file and the docroot/META-INF directory where it should reside are also created if missing.

<import file="manifest.xml" />

 

The manifest.xml should contain the following Ant script:

<?xml version="1.0"?>
<project>
    <if>

        <!-- Execute only once, saves time -->

        <not>
            <isset property="hgrevision" />
        </not>

        <then>

            <!-- Test Operating Systems -->

            <condition property="isUnix">
                <os family="unix" />
            </condition>

            <condition property="isWindows">
                <os family="windows" />
            </condition>

            <!-- Set Mercurial Executable -->

            <if>
                <isset property="isUnix" />
                <then>
                    <property name="mercurial" value="hg" />
                </then>
            </if>

            <if>
                <isset property="isWindows" />
                <then>
                    <property name="mercurial" value="${env.ProgramFiles}/Mercurial/hg.exe" />
                </then>
            </if>

            <echo message="Using: ${mercurial}" />

            <!-- Run Mercurial for Information -->

            <exec executable="${mercurial}">
                <arg value="id" />
                <arg value="-n" />
                <redirector outputproperty="hgrevision" />
            </exec>

            <!-- Trim any trailing + sign -->

            <script language="javascript">
                var hgrevision = project.getProperty("hgrevision");
                project.setProperty("hgrevision", hgrevision.replaceAll("[\+]", ""));
            </script>
            <echo message="Local Revision: ${hgrevision}" />

            <exec executable="${mercurial}">
                <arg value="id" />
                <arg value="-t" />
                <redirector outputproperty="hgtags" />
            </exec>
            <echo message="Tag: ${hgtags}" />

            <exec executable="${mercurial}">
                <arg value="id" />
                <arg value="-b" />
                <redirector outputproperty="hgbranch" />
            </exec>
            <echo message="Branch: ${hgbranch}" />

            <exec executable="${mercurial}">
                <arg value="log" />
                <arg value="-r${hgrevision}" />
                <arg value="--template" />
                <arg value='&quot;{date|isodate}&quot;' />
                <redirector outputproperty="hgdate" />
            </exec>
            <echo message="Date: ${hgdate}" />

            <exec executable="${mercurial}">
                <arg value="log" />
                <arg value="-r${hgrevision}" />
                <arg value="--template" />
                <arg value='&quot;{node}&quot;' />
                <redirector outputproperty="hgnode" />
            </exec>
            <echo message="Node: ${hgnode}" />

            <exec executable="${mercurial}">
                <arg value="log" />
                <arg value="-r${hgrevision}" />
                <arg value="--template" />
                <arg value='&quot;{node|short}&quot;' />
                <redirector outputproperty="hgsnode" />
            </exec>
            <echo message="Short Node: ${hgsnode}" />

            <exec executable="${mercurial}">
                <arg value="log" />
                <arg value="-r${hgrevision}" />
                <arg value="--template" />
                <arg value='&quot;{author}&quot;' />
                <redirector outputproperty="hgauthor" />
            </exec>
            <echo message="Author: ${hgauthor}" />

            <exec executable="${mercurial}">
                <arg value="log" />
                <arg value="-r${hgrevision}" />
                <arg value="--template" />
                <arg value='&quot;{repo}&quot;' />
                <redirector outputproperty="hgrepo" />
            </exec>
            <echo message="Repository: ${hgrepo}" />

            <!-- Ant Built-in properties -->

            <echo message="Java Version: ${java.runtime.version}" />
            <echo message="Java Vendor: ${java.vendor}" />

            <!-- Liferay properties (see build.properties of SDK) -->

            <echo message="Liferay-Version: ${lp.version}" />

            <!-- Ant Build timestamp -->

            <tstamp>
                <format property="TimeDate.Now" pattern="yyyy-MM-dd HH:mm:ss" />
            </tstamp>

            <!-- Create META-INF if missing -->

            <if>
                <not>
                    <available file="docroot/META-INF" type="dir" />
                </not>
                <then>
                    <mkdir dir="docroot/META-INF" />
                </then>
            </if>

            <manifest file="docroot/META-INF/MANIFEST.MF" mode="update">

                <!-- Mercurial Information -->

                <attribute name="Hg-Revision" value="${hgrevision}" />
                <attribute name="Hg-Tags" value="${hgtags}" />
                <attribute name="Hg-Branch" value="${hgbranch}" />
                <attribute name="Hg-Date" value="${hgdate}" />
                <attribute name="Hg-Node" value="${hgnode}" />
                <attribute name="Hg-Short-Node" value="${hgsnode}" />
                <attribute name="Hg-Author" value="${hgauthor}" />
                <attribute name="Hg-Repository" value="${hgrepo}" />

                <!-- Ant Built-in properties -->

                <attribute name="Java-Version" value="${java.runtime.version}" />
                <attribute name="Java-Vendor" value="${java.vendor}" />

                <!-- Liferay properties (see build.properties of SDK) -->

                <attribute name="Liferay-Version" value="${lp.version}" />

                <attribute name="Ant-Build-Stamp" value="${TimeDate.Now}" />
            </manifest>
        </then>
    </if>
</project>

Notes:

  • The above script expects the hg Mercurial executable to be on the path under Linux and to be installed in "c:\program files\Mercurial" when running under Windows.
  • The Short-Node consists of the first 12 characters of the full Mercurial Node hash. It corresponds to the changeset value that SourceForge displays when browsing the repository.
  • The Revision is the local revision within a project repository (in contrast to the system-wide values of Node and Short-Node). It corresponds to the changeset number that SourceForge displays when browsing the repository.
  • Because of the small piece of JavaScript used to clean up the revision number, Java 1.6 or later is required.

The resulting MANIFEST.MF looks like:

Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.1
Created-By: 20.4-b02 (Sun Microsystems Inc.)
Hg-Revision: 27
Hg-Tags: tip
Hg-Branch: my-portlet
Hg-Date: 2012-08-14 16:41 +0200
Hg-Node: cdd2c1242f3e3f528ff1e71022570d57d1b9a342
Hg-Short-Node: cdd2c1242f3e
Hg-Author: me
Hg-Repository: 9876543219101112131415161718192021222324252621
Java-Version: 1.6.0_33-b03
Liferay-Version: 6.0.12
Ant-Build-Stamp: 2012-08-15 12:17:12
Java-Vendor: Sun Microsystems Inc.
Class-Path:

The lines between Created-By: and Class-Path: originate from the added Ant build.xml script.

  • Finally, how to retrieve this information in our Java code:

We need to add some utility methods to our code:

/**
 * Retrieves a property from /META-INF/MANIFEST.MF without a prefix.
 *
 * @param property
 *            the property to retrieve
 * @return the property value, 'na' or 'error'.
 */
public static String getManifestProperty(final String property) {
    if (_log == null) {
        _log = LogFactoryUtil.getLog(MyPortlet.class);
    }

    Properties prop = new Properties();

    try {
        prop.load(FacesContext.getCurrentInstance().getExternalContext()
                .getResourceAsStream("/" + JarFile.MANIFEST_NAME));

        String rev = prop.getProperty(property);

        return rev == null ? "na" : rev.trim();
    } catch (IOException e) {
        _log.error("Error retrieving " + property + " Property from /"
                + JarFile.MANIFEST_NAME + " (" + e.getMessage() + ").");
    }

    return "error";
}

/**
 * Retrieves a property from /META-INF/MANIFEST.MF with an optional prefix.
 *
 * @param prefix
 *            a prefix like 'Svn-' or 'Hg-' in our case.
 * @param property
 *            the property to retrieve
 * @return the property value, 'na' or 'error'.
 */
public static String getManifestProperty(final String prefix,
        final String property) {
    return getManifestProperty(prefix + property);
}

/**
 * Retrieves a 'Hg-' prefixed property from /META-INF/MANIFEST.MF and
 * cleans the $Keyword$ definition from the result.
 *
 * @param property
 *            the property to retrieve
 * @return the property value, 'na' or 'error'.
 */
public static String getManifestHgProperty(final String property) {
    return getManifestProperty("Hg-", property)
            .replace("$" + property + ": ", "").replace(" $", "").trim();
}

public static String LPad(final String str, final int length) {
    return LPad(str, length, ' ');
}

public static String LPad(final String str, final int length, final char car) {
    return str
            + String.format("%" + (length - str.length()) + "s", "")
                    .replace(" ", String.valueOf(car));
}

public static String RPad(final String str, final int length) {
    return RPad(str, length, ' ');
}

public static String RPad(final String str, final int length, final char car) {
    return String.format("%" + (length - str.length()) + "s", "").replace(
            " ", String.valueOf(car))
            + str;
}

and use it, for example in the constructor of our Java backing bean, like:

_log.info("------------------------------------");

_log.info(GroupwallHelpers.LPad("Class:", 32) + getClass().getName());

_log.info(GroupwallHelpers.LPad("Revision:", 32)
        + GroupwallHelpers.getManifestHgProperty("Revision"));
_log.info(GroupwallHelpers.LPad("Node:", 32)
        + GroupwallHelpers.getManifestHgProperty("Node"));
_log.info(GroupwallHelpers.LPad("Short-Node:", 32)
        + GroupwallHelpers.getManifestHgProperty("Short-Node"));
_log.info(GroupwallHelpers.LPad("Tags:", 32)
        + GroupwallHelpers.getManifestHgProperty("Tags"));
_log.info(GroupwallHelpers.LPad("Branch:", 32)
        + GroupwallHelpers.getManifestHgProperty("Branch"));
_log.info(GroupwallHelpers.LPad("Repository:", 32)
        + GroupwallHelpers.getManifestHgProperty("Repository"));
_log.info(GroupwallHelpers.LPad("Date:", 32)
        + GroupwallHelpers.getManifestHgProperty("Date"));
_log.info(GroupwallHelpers.LPad("Author:", 32)
        + GroupwallHelpers.getManifestHgProperty("Author"));

_log.info(GroupwallHelpers.LPad("Ant-Build: ", 32)
        + GroupwallHelpers.getManifestProperty("Ant-Build-Stamp"));

_log.info(GroupwallHelpers.LPad("Java-Version: ", 32)
        + GroupwallHelpers.getManifestProperty("Java-Version"));
_log.info(GroupwallHelpers.LPad("Java-Vendor: ", 32)
        + GroupwallHelpers.getManifestProperty("Java-Vendor"));
_log.info(GroupwallHelpers.LPad("Liferay-Version: ", 32)
        + GroupwallHelpers.getManifestProperty("Liferay-Version"));

_log.info("------------------------------------");

Final note: a better type of output would be to use _log.debug() instead of _log.info() so the Tomcat log files are not polluted too much.


Command-line scanning a directory or file with Microsoft Security Essentials

Tue, 08/14/2012 - 12:41

After doing the usual web searches with Google and Bing, I found only sites claiming that scanning a file or directory with Microsoft Security Essentials from the command line was not possible. Most sites just say it's possible to initiate a quick or full scan or to update signatures from the command line, and that because Microsoft Security Essentials has real-time protection there is no need to scan manually (as a lot of us are used to) to make sure a file is scanned.

But on Windows 7 (x64 version confirmed) the tools in %ProgramFiles%\Microsoft Security Client contain some files that look promising. My first guess was to look into %ProgramFiles%\Microsoft Security Client\msseces.exe, but that program only pops up the user interface and in the worst case starts a default scan.

As Microsoft Security Essentials is able to scan manually (it has an Explorer context menu, located in %ProgramFiles%\Microsoft Security Client\shell.ext.dll), a far shot was to search for a rundll32 command to fire that context menu, but all I found were references to viruses and trojans doing the same (so not the best road to walk).

Finally I accidentally fired up one of the other executables in %ProgramFiles%\Microsoft Security Client with a promising name (MpCmdRun.exe) with the -h switch, and voilà: a long description with the answer tucked inside.

By issuing the command

"%ProgramFiles%\Microsoft Security Client\MpCmdRun.exe" -Scan -ScanType 3 -File "<file or folder to scan>"

one is able to start the command-line version of Microsoft Security Essentials, make it perform a file or folder scan and thus integrate it with popular tools like WinRAR.

Note: although %ProgramFiles% points to C:\Program Files on both Windows x86 and x64, not all applications will expand the variable properly, so you may have to spell out the full path instead.


Getting the Name of a C# Component.

Tue, 06/07/2011 - 00:50

Today I had the need to address a C# Component I wrote, which is part of a larger C# UserControl, by name (so by a String). I know there are other ways to address components, but for some reason I will not explain in this blog post, I needed this feature.

The main problem is that Components do not have a Name property although in the Visual Studio Property Inspector components do show a ‘(Name)’ property containing the value I was after.

I started searching the internet, but without luck. Plenty of solutions for Controls, but none for Components, except for the obvious 'why do you need this anyway' type of non-solutions and a fair number of unanswered questions.

Some months ago I already solved a minor piece of the puzzle by finding a way to get the Component’s Name at Design Time.

A simple ToString() and some string cutting was enough to get the value of the mysterious '(Name)' property. This with one limitation: the Component should not have its own ToString() method overriding the default one. The following code does the trick:

/// <summary>
/// GetName() only works at Design Time.
/// </summary>
/// <returns>The Name of the Component as shown in the Designer</returns>
public string GetName()
{
    int split = ToString().IndexOf(' ');
    return ToString().Substring(0, split);
}

However nicely this works at Design-Time, at Run-Time this code fails completely, as the ToString() is empty or at best contains the class name of the Component.

So how do we get this Design-Time value available at Run-Time? The answer was a specially defined property that behaves differently at Design and Run-Time.

[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
[Browsable(false)]
public String ColumnName
{
    get
    {
        //If at Design-Time we store GetName() into fColumnName.
        if (ToString().IndexOf(' ') != -1)
        {
            fColumnName = GetName();
        }
        return fColumnName;
    }
    set
    {
        //If at Design-Time we store GetName() into fColumnName and ignore value.
        if (ToString().IndexOf(' ') != -1)
        {
            fColumnName = GetName();
        }
        else
        {
            fColumnName = value;
        }
    }
}

private String fColumnName;

 

The above code uses part of the GetName() method's logic to distinguish between Design-Time (ToString()'s value is a combination of the Design-Time name and the ClassName, separated by a space) and Run-Time, where ToString() is just the ClassName.

The getter and setter work differently at Design-Time and Run-Time.

Both getter and setter set the private storage field 'fColumnName' at Design-Time with the value returned by GetName(), and the getter of course returns this value. So at Design-Time the setter totally ignores the 'value' parameter passed.

At Run-Time the getter simply returns the private storage field and the setter sets it to value, as a normal property would do.

Finally, the attributes of the property make sure it is not visible in the property inspector but will be saved into the *.designer.cs file properly. I experimented with the ReadOnly attribute too but that only leads to the form designer not saving the property into the *.designer.cs file. If you remove the ‘Browsable’ attribute or set it to true you can see the ColumnName property at Design-Time.

Basically what happens is that the Design-Time value ends up in the private storage field of a property that is saved into the *.designer.cs file. At Run-Time GetName() is totally ignored and the value retrieved from the *.designer.cs file is used as if it were a normal property.

The result is a property that changes nicely when a Component is renamed, cannot be edited at Design-Time, but is serialized into the *.designer.cs file as it should be.
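As a possible use of the serialized value, here is a hedged run-time lookup sketch; MyComponent and the lookup method are my own assumptions, not code from this post:

// Requires using System.ComponentModel; 'components' is the IContainer
// the WinForms designer generates for the UserControl.
public MyComponent FindByColumnName(string name)
{
    foreach (IComponent candidate in components.Components)
    {
        MyComponent mc = candidate as MyComponent;
        if (mc != null && mc.ColumnName == name)
        {
            return mc;
        }
    }
    return null;
}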

Note: the code presented in this blog post only works on Components you write yourself and frees you from maintaining a separate Name property that has no actual link to the Component's Design-Time name, nor would be adjusted when renaming the Component.

Maybe someone can bolt it on with an extension method.


Static properties on design-time

Tue, 05/24/2011 - 13:49

A couple of weeks ago I was working on a component to use & generate HTML help files. The idea was to drop a component on each form, where it would do its work:

  1. Generate files for Microsoft’s HtmlHelp Workshop when executed from Visual Studio.
  2. Show all kind of context sensitive help when executed outside Visual Studio.

During the writing I needed an Enabled property, visible in the property inspector of Visual Studio, that was global to all instances of my component.

I used the following (simple) construct:

public Boolean Enabled {
    get {
        return fEnabled;
    }
    set {
        fEnabled = value;
    }
}

private static Boolean fEnabled = true;

This code shows an Enabled property in the property inspector, but when the value of a single component instance is changed, the value changes in all other instances too.
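A quick sketch of that shared behaviour (HelpComponent is an assumed name for the component, not from this post):

// Both instances read and write the same static backing field.
HelpComponent first = new HelpComponent();
HelpComponent second = new HelpComponent();

first.Enabled = false;                 // change it on one instance...
Console.WriteLine(second.Enabled);     // ...prints False: the static field is shared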


RadioCheck for MenuToolStrip

Fri, 02/25/2011 - 16:48

Today I wanted to code a submenu with menu items that are checked like a radio button group. So once a single menu item has been checked, there is exactly one item checked all the time.

The task proved a bit harder than I expected. The RadioCheck property is no longer present in MenuToolStrip and ToolStripMenuItem as it was in the old MenuItem class.

So time to code one!

Basically the code below assumes you have a ToolStripMenuItem which contains all ToolStripMenuItems that are part of the radio button group in its DropDownItems collection.

Optionally you may check one ToolStripMenuItem at startup. Once a ToolStripMenuItem is checked there is no way to uncheck them all.

Each of these ToolStripMenuItems must have the CheckOnClick property set to true and an event handler attached like:

tmi.CheckedChanged += new EventHandler(tmi_CheckedChanged);
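For completeness, a small setup sketch of my own (viewToolStripMenuItem is an assumed name for the ToolStripMenuItem that owns the group, and all of its drop-down items are assumed to be ToolStripMenuItems):

// Wire every item of the submenu into the radio group.
foreach (ToolStripMenuItem tmi in viewToolStripMenuItem.DropDownItems)
{
    tmi.CheckOnClick = true;
    tmi.CheckedChanged += new EventHandler(tmi_CheckedChanged);
}

// Optionally check one item at startup.
((ToolStripMenuItem)viewToolStripMenuItem.DropDownItems[0]).Checked = true;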

 

The event handler code below unchecks all other ToolStripMenuItems in the DropDownItems collection in such a way that one does not end up with a stack overflow exception.

/// <summary>
/// Simulate RadioCheck MenuItems.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
void tmi_CheckedChanged(object sender, EventArgs e)
{
    ToolStripMenuItem tmi = (ToolStripMenuItem)sender;
    ToolStripItemCollection items = ((ToolStripMenuItem)(tmi.OwnerItem)).DropDownItems;

    //Make sure that one menu item is always checked.
    if (tmi.Checked)
    {
        //Uncheck all other menu items.
        foreach (ToolStripMenuItem tmi2 in items)
        {
            if (!tmi.Equals(tmi2) && tmi2.Checked)
            {
                tmi2.Checked = false;
            }
        }
    }
    else
    {
        //If anything else is already checked,
        //bail out to prevent a stack overflow.
        foreach (ToolStripMenuItem tmi2 in items)
        {
            if (tmi2.Checked)
            {
                return;
            }
        }

        //If nothing is checked, check ourselves again.
        tmi.Checked = true;
    }
}


Access a local Tomcat through Apache

Wed, 02/23/2011 - 11:59

I heard a nice 'trick' some time ago that I need to write about (so I do not forget it ;-).

I wanted to access a Tomcat web application on a server that was mainly running a Drupal CMS.

As I do not like shooting holes in firewalls I dislike the use of port 8080 too. The solution I heard was both simple and elegant.

Basically one installs and configures a Tomcat server and deploys a web application on this Tomcat server, listening on a port that can only be reached on the server itself (8081 in the example below).

In the Apache ‘httpd.conf’ file you enable mod_proxy by uncommenting the line:

LoadModule proxy_module modules/mod_proxy.so

Then you define a mapping of the Tomcat URL to a URL inside the Apache server's web space.

# http://<ip>:8081/<Webapp>/<Webappurl>?<parameters>
# is mapped to:
# http://<ip>/<Webappurl>?<parameters>

ProxyPass /OAIHandler http://127.0.0.1:8081/LiLiTarget/OAIHandler
ProxyPassReverse /OAIHandler http://127.0.0.1:8081/LiLiTarget/OAIHandler

In above example:

  • 8081 is the local port to the Tomcat server, not reachable from the outside.
  • 127.0.0.1 is the local address of the server, not reachable from the outside.
  • OAIHandler is a URL inside the web application that does the work (and I re-used it as URL inside the Drupal webspace). URL parameters are nicely appended.
  • LiLiTarget is the name of the web application inside Tomcat’s ‘webapps’ directory.
  • The name of the web application is not used as part of the URL.

A nice side effect of this approach is that you can easily swap Tomcat servers or have multiple Tomcat servers running, each with their own port number and each running only a single deployed webapp. This way you can nicely restart/maintain a single Tomcat without affecting the rest of the instances, or have multiple Tomcat versions running on a single server.


JScript parameters

Wed, 12/08/2010 - 14:26

Ever wondered how to get rid of those pesky *.js.php files, where you need PHP just to write a single variable into a JScript file?

After a lot of searching I stumbled upon a nice solution (see http://feather.elektrum.org/book/src.html) that allows you to pass query parameters to a script in the SCRIPT tag's SRC attribute, just like with normal HTML pages. The problem inside the script is that there are potentially two sets of query data: one for the script (the one we're after) and one for the page the script is a part of. This last set is accessed through the usual window.location object.

var scripts = document.getElementsByTagName('script');
var myScript = scripts[ scripts.length - 1 ];

var queryString = myScript.src.replace(/^[^\?]+\??/,'');

var params = parseQuery( queryString );

function parseQuery ( query ) {
    var Params = new Object ();
    if ( ! query ) return Params; // return empty object
    var Pairs = query.split(/[;&]/);
    for ( var i = 0; i < Pairs.length; i++ ) {
        var KeyVal = Pairs[i].split('=');
        if ( ! KeyVal || KeyVal.length != 2 ) continue;
        var key = unescape( KeyVal[0] );
        var val = unescape( KeyVal[1] );
        val = val.replace(/\+/g, ' ');
        Params[key] = val;
    }
    return Params;
}

After which you can simply use it with params['course'] or whatever you're after.