migrate local docker images to a remote docker host

A. New remote host

  • install docker
  • set up a simple configuration so the docker daemon is accessible from other machines: edit /etc/default/docker and add the following line
DOCKER_OPTS="-H tcp://0.0.0.0:2375"
  • restart docker daemon
sudo service docker restart

IMPORTANT: on systemd-based systems (e.g. vivid), we must add a new file /etc/systemd/system/docker.service.d/docker.conf with the following content:

[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=
ExecStart=/usr/bin/docker -d $DOCKER_OPTS

Then reload systemd and restart docker:

sudo systemctl daemon-reload
sudo service docker restart

Now, when trying :

docker ps

we'll get the following error:

Get http:///var/run/docker.sock/v1.19/containers/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?

That's because the docker client tries to connect to the local unix socket by default. So, from now on, we must specify which host to connect to when running the docker client, like so:

docker -H localhost:2375 ps
# or
docker -H 127.0.0.1:2375 ps
# or simply
docker -H :2375 ps

OK, but it's annoying to specify the host every time! The solution is to define the DOCKER_HOST environment variable:

export DOCKER_HOST=0.0.0.0:2375

then we can just run :

docker ps

B. Local host

We must set the DOCKER_HOST environment variable as follows (assuming the remote host IP is 192.168.33.10)

export DOCKER_HOST=192.168.33.10:2375

C. Transfer images from local docker host to remote docker host

Now, to move all local images to the new docker host, we proceed in three steps:

  1. on the local machine: export the image as a tar file (-H "" overrides DOCKER_HOST and targets the local daemon)
docker -H "" save -o /tmp/myimage.tar myimage
  2. transfer the tar file to the remote machine (with scp for example)
scp /tmp/myimage.tar user@192.168.33.10:/tmp/
  3. on the remote machine: load the image
docker load -i /tmp/myimage.tar
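The three steps can also be streamed in a single pipeline, avoiding the intermediate tar file. A minimal sketch, assuming ssh access to the remote host and docker installed on both ends (the image and host names are just the examples from above):

```shell
# Stream an image from the local daemon straight into the remote one.
# usage: transfer_image myimage user@192.168.33.10
transfer_image() {
    docker -H "" save "$1" | ssh "$2" docker load
}
```

This works because docker save writes to stdout when no -o is given, and docker load reads from stdin by default.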

Hope this helps

nginx as reverse proxy

I'm working on dockerizing several projects with docker-compose, and I have automated checking the projects out from the git repository and building them. One of the maven projects inherits the url of the nexus repository from its parent pom. Unfortunately, the nexus server was moved to a new url. This breaks the maven build, because it can't download artifacts from the old nexus url.

Since I have no time to fix the nexus url in the parent pom and recompile all depending projects, and since I'm in a rush to run and validate my docker-compose, I'm looking for a quick solution.

We could simply use /etc/hosts to redirect the old url to the new one. But in my case, that's not possible because the old url contains a port number.

old url: http://oldurl.net:8080
new url: https://newurl.net

So, I use nginx as a reverse proxy to solve the problem:

Step 1: install nginx

$ sudo apt-get install nginx

Step 2: nginx configuration. Create /etc/nginx/conf.d/witr.conf with the following content

server {
    listen 8080;
    server_name oldurl.net;
    location / {
        proxy_pass https://newurl.net;
        proxy_redirect off;
    }
}

Step 3: locally redirect the server name. Edit /etc/hosts and add a new entry

127.0.0.1 oldurl.net

Step 4: restart nginx

service nginx restart

Finally: run my automated build. On download, maven asks for oldurl.net:8080; /etc/hosts redirects that name to nginx; nginx forwards the request to newurl.net. Maven can download its artifacts and the build no longer fails.

known trouble when using aapt

You have trouble using aapt, and the following error is raised: java.io.IOException: Cannot run program "/opt/Android/Sdk/build-tools/22.0.1/aapt": error=2, No such file or directory. You are probably running aapt, which is a 32-bit application, on a 64-bit ubuntu. To get it working, simply install:

sudo apt-get install lib32stdc++6 lib32z1

access to a dockerized apache from local machine

Get ubuntu

docker pull ubuntu:vivid

Start ubuntu container

docker run -ti ubuntu:vivid /bin/bash

Install apache

root@54a954be4ca3:/# apt-get install apache2

Type CTRL+P then CTRL+Q: this detaches from the shell without killing the container. Now, on the local machine, check the container name

docker ps

Save the container with its apache installed (suppose your container name is happy_pasteur)

docker commit -a "witr " -m "install apache2" happy_pasteur witr/myapache

Stop the container

docker stop happy_pasteur

Start apache in a new container from the "witr/myapache" image and bind the ports

docker run -d -p 9999:80 witr/myapache /usr/sbin/apache2ctl -D FOREGROUND

Finally browse http://localhost:9999

VT-x is being used by another hypervisor .. Please disable the KVM

While starting my VM using vagrant, I got the following error:

The guest machine entered an invalid state while waiting for it to boot. Valid states are 'starting, running'. The machine is in the 'poweroff' state. Please verify everything is configured properly and try again. If the provider you're using has a GUI that comes with it, it is often helpful to open that and watch the machine, since the GUI often has more helpful error messages than Vagrant can retrieve. For example, if you're using VirtualBox, run `vagrant up` while the VirtualBox GUI is open.

So, I edit my Vagrantfile and uncomment the gui option

   config.vm.provider "virtualbox" do |vb|
     vb.gui = true
   end

Then I start my VM using VirtualBox and get the following error: VT-x is being used by another hypervisor ... Please disable the KVM

What to do?

  1. see if kvm is running and in use
witr@serv:~$ lsmod | grep kvm
kvm_intel             143592  3
kvm                   459835  1 kvm_intel

Here, kvm is used by one consumer (kvm_intel), and kvm_intel is used by 3 consumers.

  2. stop kvm consumers. In my case it's the android emulator launched from AndroidStudio that causes the problem. I stop the emulator.

  3. restart the vm. That's all.

extract json text

cat <file> | while read line ; do echo -e `expr match "$line" '\({.*}\)'`; done | grep "{"
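A shorter equivalent, assuming GNU grep: -o prints only the matched part of each line, so lines without braces are dropped automatically:

```shell
# Extract the {...} span from each line of mixed log/json text.
printf 'log: {"a":1} tail\nno json here\n' | grep -o '{.*}'   # prints: {"a":1}
```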

save received traffic on tcp port while the port is in use

You have to save all received traffic on a tcp port, but there is already a process listening on that port.

=> use tcpdump :

sudo tcpdump -vv -x -X  -i <interface> 'port <port>' -w <output_file>

For a prettier capture that can be analysed with Wireshark, => use dumpcap:

dumpcap -n -i <interface> -f "port <port>" -w <output_file>

clean up local repo

delete branches already deleted in remote repo

witr$ git remote prune origin

list local branches and their upstream branches: helps to see which local branch references a deleted remote branch

witr$ git branch -vv

delete a branch referencing a deleted remote branch

witr$ git branch -d branch-to-delete

local clean up (beware: -x also removes ignored files, e.g. build outputs)

witr$ git clean -dfx
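To see exactly what -dfx sweeps away, here is a throwaway demo on a scratch repo (temporary directory, dummy identity; nothing touches your real repos):

```shell
# Scratch repo: a tracked .gitignore, an ignored build/ dir, an untracked file.
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo 'build/' > .gitignore
git add .gitignore
git -c user.email=demo@example.com -c user.name=demo commit -qm init
mkdir build && touch build/out.o untracked.txt
git clean -dfx -q
ls -A    # only .git and the tracked .gitignore survive
```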

target busy when umount

I have mounted a webdav directory on /mnt/wdav (more). And now I want to umount it, but I get the following error:

umount: /mnt/wdav: target is busy (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1).)

first step : find out processes using this mount

witr$ fuser -u /mnt/wdav
/mnt/wdav: 408c(witr)

second step : identify process

witr$ ps 408
  PID TTY     STAT TIME COMMAND
  408 pts/33  Ss   0:00 /opt/appli

third step : stop process

witr$ fuser -k /mnt/wdav

last step : safe umount

witr$ sudo umount /mnt/wdav

mount webdav directory

  • install davfs2
witr$ sudo apt-get install davfs2
  • create folder
witr$ sudo mkdir /mnt/wdav
  • mount webdav
witr$ sudo mount.davfs https://webdav.witr.net /mnt/wdav
Username: witr
Password:

mount remote file systems over ssh (sshfs)

Mount remote file systems over ssh with three steps:

  1. install sshfs
witr@witr-pc:~$ sudo apt-get install sshfs
  2. create the directory where you will mount your remote file system
witr@witr-pc:~$ sudo mkdir /mnt/witrRemote
  3. finally, mount the remote file system
witr@witr-pc:~$ sudo sshfs witr@serv.witr.net:myRemoteFolder/ /mnt/witrRemote/

This assumes myRemoteFolder is in witr's home directory on the server. See the warning below.

warn : sshfs does not expand ~; paths are relative to the remote home directory. That means "sshfs witr@serv.witr.net:~/myRemoteFolder ..." will fail with a No such file or directory error.

guess ssh key passphrase

You have probably forgotten your ssh key passphrase, but you have a hunch what it might be. The simple way to check is to use ssh-keygen with the -y argument, which reads a private key file and prints the corresponding public key:

witr@witr-pc:~$ ssh-keygen -y
Enter file in which the key is (/home/witr/.ssh/id_rsa): /tmp/my_private_ssh_key
Enter passphrase:

If you input the correct passphrase, it will show you the associated public key. Otherwise, it will display

load failed
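The check can also be done non-interactively with -P. A sketch (the key below is a throwaway generated just for the demo; point -f at your real key file, e.g. ~/.ssh/id_rsa):

```shell
# Generate a demo key whose passphrase is "secret", then try two guesses.
key="$(mktemp -d)/id_rsa"
ssh-keygen -q -t rsa -b 2048 -N 'secret' -f "$key"
ssh-keygen -y -P 'secret' -f "$key" >/dev/null && echo "passphrase ok"
ssh-keygen -y -P 'wrong'  -f "$key" >/dev/null 2>&1 || echo "wrong passphrase"
```

ssh-keygen exits non-zero on a wrong passphrase, which makes it easy to script a list of guesses.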

RabbitMQ management gui not reached

if http://server-name:15672 cannot be reached, be sure that you enabled the rabbitmq_management plugin

witr$ rabbitmq-plugins enable rabbitmq_management

you must restart the RabbitMQ server for the change to take effect

witr$ service rabbitmq-server restart

clean system data (oracle database)

The following sql query lists SYSTEM tablespace segments by size, in descending order. It helps to detect which table data could be purged to save disk space:

select owner, segment_name, segment_type, bytes / 1024 / 1024 "size"
  from dba_segments
 where tablespace_name = 'SYSTEM'
 order by "size" desc;

Audit table SYS.AUD$ could be the cause of full disk space. Administrator must control the growth and size of the audit trail.

upgrade mongodb under ubuntu

To upgrade mongodb follow the steps below:

$[/home/witr] sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

$[/home/witr] echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list

$[/home/witr] sudo apt-get update

$[/home/witr] sudo apt-get install mongodb-10gen

Note: while upgrading you may have a dpkg-deb error processing at step 4:

/var/cache/apt/archives/mongodb-10gen_2.4.11_i386.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

You must remove mongodb-clients before apt-get install :

$[/home/witr] sudo apt-get remove mongodb-clients

$[/home/witr] sudo apt-get install mongodb-10gen

Remove all references to the key before the key is to be dropped

If oracle refuses to drop a constraint with the message "Remove all references to the key before the key is to be dropped", it's clear that we must drop the references first. But how do we list those references?

The sql query below lists all constraints that reference the constraint to be dropped, "CONSTRAINT_TO_BE_DROPPED_NAME":

select * from all_constraints
where constraint_type='R' and 
r_constraint_name='CONSTRAINT_TO_BE_DROPPED_NAME';

when ssh connection refused with "Too many authentication failures for x"

If you are here, you have probably got the following message when trying to connect with ssh: "Received disconnect from xx.xx.xxx.xx: 2: Too many authentication failures for x"

In fact, when trying to connect, ssh offers all locally registered keys to the server, trying them one by one. The server rejects the connection after too many keys have been refused.

If you are connecting with a key, try:

ssh -i your_key -o 'IdentitiesOnly yes' user@server

If you are connecting without a key (login and password only), try:

ssh -o 'IdentitiesOnly yes' user@server
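To make this permanent for one host, the same option can go into ~/.ssh/config (the host alias and key path below are examples, not values from this post):

```
Host myserver
    HostName server
    User user
    IdentityFile ~/.ssh/your_key
    IdentitiesOnly yes
```

Then `ssh myserver` offers only that one key.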

extract git repository sub-folder as new git repository

Problem :

We have a git repository located at user@server:/home/witr/git-repos/witr.git. witr.git contains 3 projects: proj1, proj2 and proj3. The projects keep growing, and cloning witr.git has become too slow because of its size.

Solution : extract each project as a new git repository. Follow the steps below to extract proj1 (do the same for the other projects).

  1. on server : initialize new git repository proj1

    $ cd /home/witr/git-repos
    $ git init --bare proj1.git

  2. on local machine : extract proj1 as new git

    $ git clone user@server:/home/witr/git-repos/witr.git
    $ cd witr
    $ git branch branch-proj1
    $ git filter-branch -f --subdirectory-filter path/to/proj1 branch-proj1
    $ cd ..
    $ mkdir proj1-tmp
    $ cd proj1-tmp
    $ git init
    $ git pull ../witr branch-proj1
    $ git remote add proj1 user@server:/home/witr/git-repos/proj1.git
    $ git push proj1 master

  3. on local machine : ensure that all is right

    $ cd /path/tmp/
    $ git clone user@server:/home/witr/git-repos/proj1.git
    $ cd proj1
    $ git log --pretty=format:"%an %ad %s" -10

The local machine commands could be automated with the following bash script

#!/bin/bash

REMOTE_GIT=user@server:/home/witr/git-repos/$1.git

echo "start extract $REMOTE_GIT"
cd witr

echo "create and filter branch-$1"
git branch branch-$1
git filter-branch -f --subdirectory-filter $1 branch-$1

echo "create new local git repo branch-$1"
cd ..
mkdir branch-$1
cd branch-$1
git init
git pull ../witr branch-$1

echo "push to remote git $REMOTE_GIT"
git remote add $1 $REMOTE_GIT
git push $1 master

echo "done"

android apk NoClassDefFoundError (gradle build)

My android application is built with gradle, and my apk is generated and works correctly. After some development iterations, I include the android-support-v4.jar library in the ide project and in the gradle build file as follows:

compile files('libs/android-support-v4.jar')

Build with gradle and install the apk on the avd ==> NoClassDefFoundError (for a class that extends a class located in the android-support-v4.jar library)

Solution:

./gradlew clean

And it works again.

onSwipe listener without using Swipe Views

  1. create SimpleGestureFilter class containing interface SimpleGestureListener

    package net.witr.swipe.ui;

    import android.app.Activity;
    import android.view.GestureDetector;
    import android.view.GestureDetector.SimpleOnGestureListener;
    import android.view.MotionEvent;

    public class SimpleGestureFilter extends SimpleOnGestureListener{

     public final static int SWIPE_UP    = 1;
     public final static int SWIPE_DOWN  = 2;
     public final static int SWIPE_LEFT  = 3;
     public final static int SWIPE_RIGHT = 4;
    
     public final static int MODE_TRANSPARENT = 0;
     public final static int MODE_SOLID       = 1;
     public final static int MODE_DYNAMIC     = 2;
    
     private final static int ACTION_FAKE = -13; //just an unlikely number
     private int swipe_Min_Distance = 100;
     private int swipe_Max_Distance = 1000;
     private int swipe_Min_Velocity = 0;
    
     private int mode             = MODE_DYNAMIC;
     private boolean running      = true;
     private boolean tapIndicator = false;
    
     private Activity context;
     private GestureDetector detector;
     private SimpleGestureListener listener;
    
     public SimpleGestureFilter(Activity context,SimpleGestureListener sgl) {
    
         this.context = context;
         this.detector = new GestureDetector(context, this);
         this.listener = sgl;
     }
    
     public void onTouchEvent(MotionEvent event){
    
         if(!this.running)
             return;
    
         boolean result = this.detector.onTouchEvent(event);
    
         if(this.mode == MODE_SOLID)
             event.setAction(MotionEvent.ACTION_CANCEL);
         else if (this.mode == MODE_DYNAMIC) {
    
             if(event.getAction() == ACTION_FAKE)
                 event.setAction(MotionEvent.ACTION_UP);
             else if (result)
                 event.setAction(MotionEvent.ACTION_CANCEL);
             else if(this.tapIndicator){
                 event.setAction(MotionEvent.ACTION_DOWN);
                 this.tapIndicator = false;
             }
    
         }
         //else just do nothing, it's Transparent
     }
    
     public void setMode(int m){
         this.mode = m;
     }
    
     public int getMode(){
         return this.mode;
     }
    
     public void setEnabled(boolean status){
         this.running = status;
     }
    
     public void setSwipeMaxDistance(int distance){
         this.swipe_Max_Distance = distance;
     }
    
     public void setSwipeMinDistance(int distance){
         this.swipe_Min_Distance = distance;
     }
    
     public void setSwipeMinVelocity(int distance){
         this.swipe_Min_Velocity = distance;
     }
    
     public int getSwipeMaxDistance(){
         return this.swipe_Max_Distance;
     }
    
     public int getSwipeMinDistance(){
         return this.swipe_Min_Distance;
     }
    
     public int getSwipeMinVelocity(){
         return this.swipe_Min_Velocity;
     }
    
     @Override
     public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX,
                            float velocityY) {
    
         final float xDistance = Math.abs(e1.getX() - e2.getX());
         final float yDistance = Math.abs(e1.getY() - e2.getY());
    
         if(xDistance > this.swipe_Max_Distance || yDistance > this.swipe_Max_Distance)
             return false;
    
         velocityX = Math.abs(velocityX);
         velocityY = Math.abs(velocityY);
         boolean result = false;
    
         if(velocityX > this.swipe_Min_Velocity && xDistance > this.swipe_Min_Distance){
             if(e1.getX() > e2.getX()) // right to left
                 this.listener.onSwipe(SWIPE_LEFT);
             else
                 this.listener.onSwipe(SWIPE_RIGHT);
    
             result = true;
         }
         else if(velocityY > this.swipe_Min_Velocity && yDistance > this.swipe_Min_Distance){
             if(e1.getY() > e2.getY()) // bottom to up
                 this.listener.onSwipe(SWIPE_UP);
             else
                 this.listener.onSwipe(SWIPE_DOWN);
    
             result = true;
         }
    
         return result;
     }
    
     @Override
     public boolean onSingleTapUp(MotionEvent e) {
         this.tapIndicator = true;
         return false;
     }
    
     @Override
     public boolean onDoubleTap(MotionEvent arg) {
         this.listener.onDoubleTap();
         return true;
     }
    
     @Override
     public boolean onDoubleTapEvent(MotionEvent arg) {
         return true;
     }
    
     @Override
     public boolean onSingleTapConfirmed(MotionEvent arg) {
    
         if(this.mode == MODE_DYNAMIC){        // we owe an ACTION_UP, so we fake an
             arg.setAction(ACTION_FAKE);      //action which will be converted to an ACTION_UP later.
             this.context.dispatchTouchEvent(arg);
         }
    
         return false;
     }
    
     static interface SimpleGestureListener{
         void onSwipe(int direction);
         void onDoubleTap();
     }
    

    }

And then, your Activity must implement the SimpleGestureListener interface as follows

import com.android.swipe.R;
import net.witr.swipe.SimpleGestureFilter.SimpleGestureListener;
import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;
import android.widget.Toast;
 
public class WitrActivity extends Activity implements SimpleGestureListener{

    private SimpleGestureFilter detector;
          
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.witr);
          
        // Detect touched area 
        detector = new SimpleGestureFilter(this,this);
    }
          
    @Override
    public boolean dispatchTouchEvent(MotionEvent me){
        // Call onTouchEvent of SimpleGestureFilter class
         this.detector.onTouchEvent(me);
       return super.dispatchTouchEvent(me);
    }

    @Override
    public void onSwipe(int direction) {

      String str = "";
      
      switch (direction) {
      
      case SimpleGestureFilter.SWIPE_RIGHT :
          str = "Swipe Right";
          break;
      case SimpleGestureFilter.SWIPE_LEFT :
          str = "Swipe Left";
          break;
      case SimpleGestureFilter.SWIPE_DOWN :
          str = "Swipe Down";
          break;
      case SimpleGestureFilter.SWIPE_UP :
          str = "Swipe Up";
          break;      
      }

      Toast.makeText(this, str, Toast.LENGTH_SHORT).show();

    }
      
    @Override
    public void onDoubleTap() {
        Toast.makeText(this, "Double Tap", Toast.LENGTH_SHORT).show();
    }
          
}

Reference : here

resume android MediaPlayer after pause

To pause MediaPlayer use :

mediaPlayer.pause();

And to resume

int pausePosition = mediaPlayer.getCurrentPosition();
mediaPlayer.seekTo(pausePosition);
mediaPlayer.start();

browse sdcard of your avd (Android Virtual Device)

you have already installed android studio or android sdk.

type

> cd $ANDROID_SDK_HOME
> ./tools/ddms

If your avd is started you will see it in the ddms Monitor (otherwise go here to see how to start an avd).

Select your avd and then go to menu Device->File Explorer

play your apk file on android emulator

you have already installed android studio or the android sdk. you have already configured your avd (android virtual device)

1. list your AVDs

> cd $ANDROID_SDK_HOME
> ./tools/android list avds

result looks like

Available Android Virtual Devices:
    Name: AVD_for_Nexus_S_by_Google
    Path: /home/witr/.android/avd/AVD_for_Nexus_S_by_Google.avd
  Target: Android 4.2.2 (API level 17)
     ABI: armeabi-v7a
    Skin: 480x800
---------
    Name: AVD_for_Galaxy_Nexus_by_Google
    Path: /home/witr/.android/avd/AVD_for_Galaxy_Nexus_by_Google.avd
  Target: Android 4.2.2 (API level 17)
     ABI: armeabi-v7a
    Skin: 720x1280

2. start emulator of @AVD_for_Galaxy_Nexus_by_Google

> ./tools/emulator @AVD_for_Galaxy_Nexus_by_Google

or

> ./tools/android avd

Then select your avd and click start

3. install your apk file. Copy your apk file (e.g. myApplication.apk) into $ANDROID_SDK_HOME/platform-tools/. Once the avd is started, open a new terminal and type

> cd $ANDROID_SDK_HOME/platform-tools
> ./adb install myApplication.apk

If the install succeeds you will see

* daemon not running. starting it now on port 5037 *
* daemon started successfully *
2594 KB/s (326422 bytes in 0.122s)
        pkg: /data/local/tmp/myApplication.apk
Success

4. launch your application in the avd. Go to your avd emulator and launch myApplication

checkout svn project without svn files

if you have already checked the project out using the "svn co" command, please go here.

Otherwise, retrieve project using:

> svn export https://svn.witr.net/repos/myproj

==> Please note that myproj, in this case, isn’t a working copy !

import many versions of project to svn repository

I have traditionally versioned my project by copying my project folder, renaming the copy and continuing work on it.

Now, I have several versions of my project in different folders: myproj_v1, myproj_v2, … I want to import my project version by version, so I'll have a change log.

To import like so (e.g. to http://svn.witr.net/repos/myproj), we have to type the following commands:

1. copy the first version into a different folder

[witr@localhost] cp -r myproj_v1 myproj

2. import to svn

[witr@localhost] svn import -m "first import v1" ./myproj http://svn.witr.net/repos/myproj

3. rsync the contents of the second version over the working copy

[witr@localhost] rsync -ah myproj_v2/ myproj/

4. force-add unversioned elements

[witr@localhost] cd myproj
[witr@localhost] svn add --force * --auto-props --parents --depth infinity -q

5. commit second version

[witr@localhost] svn ci -m "commit v2"

==> repeat steps 3, 4, 5 to continue with next versions folders
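The repeated steps 3-5 can be wrapped in a small helper. A sketch, assuming the version folders are named myproj_vN and the working copy myproj from step 1 (note the trailing slash on the rsync source, which copies the folder's contents rather than the folder itself):

```shell
# import_version <n> : overlay myproj_vN onto the working copy and commit it.
import_version() {
    rsync -ah "myproj_v$1/" myproj/
    ( cd myproj \
      && svn add --force . --auto-props --parents --depth infinity -q \
      && svn ci -m "commit v$1" )
}
# usage: for v in 2 3 4; do import_version "$v"; done
```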

profile remote host jboss7 with jprofiler

local machine

  • create ssh tunnel : ssh -L 8849:127.0.0.1:8849 remoteuser@remotehost
  • launch jprofiler : e.g. /mnt/jprofiler7/bin/jprofiler
  • new session -> remote host -> enter host 127.0.0.1 and port 8849 -> save the session (don't start it: you must complete the remote host configuration and restart the server first)

remote host (linux x64)

  • install(unzip) jprofiler in any folder in remote host file system : e.g. /opt/jprofiler
  • edit $JBOSS_HOME/bin/standalone.sh and add the following (this launches the jprofiler agent with the jboss server; the agent will listen on port 8849)

    export JAVA_OPTS="$JAVA_OPTS -agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849"

  • restart jboss server

local machine

  • start jprofiler saved session

That’s all

remove ^M characters in vi

To remove all carriage return characters (displayed as ^M), in VI type command: %s/CTRL-V CTRL-M//g

command will appear like following in VI :

:%s/^M//g
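Outside vi, the same cleanup can be done in a pipeline; tr simply deletes every carriage return:

```shell
# Strip carriage returns (^M, i.e. \r) from DOS line endings.
printf 'line1\r\nline2\r\n' | tr -d '\r'
```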

recover lost centos 5&6 root password

  1. Reboot
  2. Press any key after reboot when boot countdown, to go to GRUB menu
  3. From that menu, select the appropriate kernel version and press the ‘e’ key
  4. Then select the kernel /vmlinuz-… line and press the ‘e’ key
  5. At the end of displayed line type space and then type 1 (you can type ‘s’ or ‘single’ rather than ‘1’). Press Enter to save changes
  6. You should already be on the kernel /vmlinuz… line. Press the ‘b’ key to boot to these temporary options to allow you to recover your root password
  7. When prompted type ‘passwd’ and press enter. You will be prompted to type new password twice
  8. After typing new password and confirming it, type ‘reboot’ command.

Got from here

define specific location for mysql database

You can define a different database location with a simple symlink.

create a new database witrDB

$ mysql -u witr -p
> CREATE DATABASE witrDB;
> quit

list all stored databases

$ sudo ls /var/lib/mysql/

witrDB must be listed

stop the mysql service

$ sudo service mysql stop

move witrDB to different location

$ sudo mv /var/lib/mysql/witrDB /drive/myDrive/myDatabases/

create symlink

$ sudo ln -s /drive/myDrive/myDatabases/witrDB /var/lib/mysql/witrDB

restart mysql service

$ sudo service mysql start
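The symlink trick itself can be seen with plain files, no mysql needed (all paths below are throwaway examples created in a temp directory):

```shell
# Move a directory elsewhere, then leave a symlink at the old path:
# readers of the old path transparently follow the link.
base=$(mktemp -d)
mkdir -p "$base/var_lib_mysql/witrDB" "$base/new_location"
echo "data" > "$base/var_lib_mysql/witrDB/table.frm"
mv "$base/var_lib_mysql/witrDB" "$base/new_location/"
ln -s "$base/new_location/witrDB" "$base/var_lib_mysql/witrDB"
cat "$base/var_lib_mysql/witrDB/table.frm"   # prints: data
```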

If the mysql server job fails to start:

  1. ensure that your linux user has rights on the new database directory
$ chown youruser:yourgroup /drive/myDrive/myDatabases/witrDB
  2. look at the apparmor config
$ sudo vi /etc/apparmor.d/usr.sbin.mysqld

add these two lines (using your own database directory):

/mnt/perso/mysql.perso/ r,
/mnt/perso/mysql.perso/** rwk,

$ sudo service apparmor restart

  3. restart the mysql service and try again
$ sudo service mysql restart

want to handle json in Java : play with javax.json

First, get the libraries. With maven, for example, add these two artifacts

        <dependency>
            <groupId>javax.json</groupId>
            <artifactId>javax.json-api</artifactId>
            <version>1.0</version>
        </dependency>

        <dependency>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.json</artifactId>
            <version>1.0.4</version>
        </dependency>

Parse json String variable (event processing)

    String json = "{\"name\":\"witr\",\"quotes\":{\"java\":\"150\",\"linux\":\"200\"}}";

    JsonParserFactory factory = Json.createParserFactory(null);
    JsonParser parser = factory.createParser(new StringReader(json));

    while (parser.hasNext()) {
        JsonParser.Event event = parser.next();

        switch (event) {
            case KEY_NAME: {
                String key = parser.getString();
                System.out.print(key + "="); break;
            }
            case VALUE_STRING: {
                String value = parser.getString();
                System.out.println(value); break;
            }
        }
    }
    
    // output :
    // name=witr
    // quotes=java=150
    // linux=200

Read a json String variable (object model)

    String json = "{\"name\":\"witr\",\"quotes\":{\"java\":150,\"linux\":200}}";

    JsonReader jsonReader = Json.createReader(new StringReader(json));
    JsonObject jsonObject = jsonReader.readObject();
    System.out.println("name : "+jsonObject.getString("name"));
    System.out.println("quotes : "+jsonObject.getJsonObject("quotes"));
    System.out.println("java : "+jsonObject.getJsonObject("quotes").getInt("java"));
    System.out.println("linux : "+jsonObject.getJsonObject("quotes").getInt("linux"));

    // output :
    // name : witr
    // quotes : {"java":150,"linux":200}
    // java : 150
    // linux : 200

delete duplicate lines within text file

Delete duplicate lines within a text file in two steps:

  • sort the file : sort command
  • extract lines and ignore duplicate ones : awk '!x[$0]++'
> sort input.txt | awk '!x[$0]++' >> output.txt
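Note that awk '!x[$0]++' on its own already drops duplicates while keeping the first occurrence in the original order; sort is only needed when you also want sorted output:

```shell
# Each line is printed only the first time it is seen.
printf 'b\na\nb\na\n' | awk '!x[$0]++'   # prints: b, then a
```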

Hibernate exception : cannot simultaneously fetch multiple bags

Problem

Exception class : org.hibernate.loader.MultipleBagFetchException
Exception : cannot simultaneously fetch multiple bags

Solution

Use @LazyCollection(LazyCollectionOption.FALSE) rather than fetch=FetchType.EAGER. The @LazyCollection(LazyCollectionOption.FALSE) annotation makes the collection load as with FetchType.EAGER, and you can use it on two or more collections.

Example

Initial Code : Works with lazy fetch type

public class WitrEntity{

    private Set<WitrEntityLabel> witrEntityLabels = new HashSet(0);

    [...]

    @OneToMany(fetch=FetchType.LAZY, mappedBy="witrEntity")
    public Set<WitrEntityLabel> getWitrEntityLabels() {
        return this.witrEntityLabels;
    }

}

public class WitrEntityLabel{

    private WitrEntity witrEntity;

    [...]

    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="WITR_ENTITY_ID", nullable=false, insertable=false, updatable=false)
    public WitrEntity getWitrEntity() {
        return this.witrEntity;
    }

}

Wrong manipulation : we want to set the fetch type to eager

public class WitrEntity{

    private Set<WitrEntityLabel> witrEntityLabels = new HashSet(0);

    [...]

    @OneToMany(fetch=FetchType.EAGER, mappedBy="witrEntity") // THIS IS THE CAUSE OF EXCEPTION
    public Set<WitrEntityLabel> getWitrEntityLabels() {
        return this.witrEntityLabels;
    }

}

public class WitrEntityLabel{

    private WitrEntity witrEntity;

    [...]

    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="WITR_ENTITY_ID", nullable=false, insertable=false, updatable=false)
    public WitrEntity getWitrEntity() {
        return this.witrEntity;
    }

}

Solution : if you want to force loading collection

public class WitrEntity{

    private Set<WitrEntityLabel> witrEntityLabels = new HashSet(0);

    [...]

    @OneToMany(mappedBy="witrEntity") // fetch attribute removed
    @LazyCollection(LazyCollectionOption.FALSE) // use @LazyCollection(FALSE) instead
    public Set<WitrEntityLabel> getWitrEntityLabels() {
        return this.witrEntityLabels;
    }
}

Jboss7 (jboss-cli) : how to redirect specified class log in specified file

Don't edit standalone.xml and then restart jboss; just use jboss-cli commands. Only three steps are needed

1. Add a file log handler

/subsystem=logging/file-handler=HANDLER_NAME:add(file={"path"=>"FILE_NAME","relative-to"=>"jboss.server.log.dir"},level="LEVEL")

2. Add a log category

/subsystem=logging/logger=CATEGORY_NAME:add(level="LEVEL")

3. Add a log handlers to a log category

/subsystem=logging/logger=CATEGORY_NAME:assign-handler(name="HANDLER_NAME")

done…

For example : we want to redirect net.witr.MyClass info logs to the witr.log file

[witr@WITR-PC]#cd $JBOSS_HOME
[witr@WITR-PC]#./bin/jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
[standalone@localhost:9991 /] /subsystem=logging/file-handler=WITR_HANDLER:add(file={"path"=>"witr.log","relative-to"=>"jboss.server.log.dir"},level="INFO")
[standalone@localhost:9991 /] /subsystem=logging/logger=net.witr.MyClass:add(level="INFO")
[standalone@localhost:9991 /] /subsystem=logging/logger=net.witr.MyClass:assign-handler(name="WITR_HANDLER")

password* functions before php 5.5.0

There is no way to use password_hash, password_verify and the other password* functions natively with php older than 5.5.0. You can see the frustrating indication (PHP 5 >= 5.5.0) in the php documentation.

If you are not the admin of the php server, or you don't want to upgrade it right now, you are, like me, rescued by ircmaxell, who wrote with contributors a one-file php library, password.php, listed below. Take the latest version found here

Simple to use : download password.php, include it in your php page and enjoy

password.php

 * @license http://www.opensource.org/licenses/mit-license.html MIT License
 * @copyright 2012 The Authors
 */

namespace {

if (!defined('PASSWORD_DEFAULT')) {

    define('PASSWORD_BCRYPT', 1);
    define('PASSWORD_DEFAULT', PASSWORD_BCRYPT);

    /**
     * Hash the password using the specified algorithm
     *
     * @param string $password The password to hash
     * @param int    $algo     The algorithm to use (Defined by PASSWORD_* constants)
     * @param array  $options  The options for the algorithm to use
     *
     * @return string|false The hashed password, or false on error.
     */
    function password_hash($password, $algo, array $options = array()) {
        if (!function_exists('crypt')) {
            trigger_error("Crypt must be loaded for password_hash to function", E_USER_WARNING);
            return null;
        }
        if (!is_string($password)) {
            trigger_error("password_hash(): Password must be a string", E_USER_WARNING);
            return null;
        }
        if (!is_int($algo)) {
            trigger_error("password_hash() expects parameter 2 to be long, " . gettype($algo) . " given", E_USER_WARNING);
            return null;
        }
        $resultLength = 0;
        switch ($algo) {
            case PASSWORD_BCRYPT:
                // Note that this is a C constant, but not exposed to PHP, so we don't define it here.
                $cost = 10;
                if (isset($options['cost'])) {
                    $cost = $options['cost'];
                    if ($cost < 4 || $cost > 31) {
                        trigger_error(sprintf("password_hash(): Invalid bcrypt cost parameter specified: %d", $cost), E_USER_WARNING);
                        return null;
                    }
                }
                // The length of salt to generate
                $raw_salt_len = 16;
                // The length required in the final serialization
                $required_salt_len = 22;
                $hash_format = sprintf("$2y$%02d$", $cost);
                // The expected length of the final crypt() output
                $resultLength = 60;
                break;
            default:
                trigger_error(sprintf("password_hash(): Unknown password hashing algorithm: %s", $algo), E_USER_WARNING);
                return null;
        }
        $salt_requires_encoding = false;
        if (isset($options['salt'])) {
            switch (gettype($options['salt'])) {
                case 'NULL':
                case 'boolean':
                case 'integer':
                case 'double':
                case 'string':
                    $salt = (string) $options['salt'];
                    break;
                case 'object':
                    if (method_exists($options['salt'], '__tostring')) {
                        $salt = (string) $options['salt'];
                        break;
                    }
                case 'array':
                case 'resource':
                default:
                    trigger_error('password_hash(): Non-string salt parameter supplied', E_USER_WARNING);
                    return null;
            }
            if (PasswordCompat\binary\_strlen($salt) < $required_salt_len) {
                trigger_error(sprintf("password_hash(): Provided salt is too short: %d expecting %d", PasswordCompat\binary\_strlen($salt), $required_salt_len), E_USER_WARNING);
                return null;
            } elseif (0 == preg_match('#^[a-zA-Z0-9./]+$#D', $salt)) {
                $salt_requires_encoding = true;
            }
        } else {
            $buffer = '';
            $buffer_valid = false;
            if (function_exists('mcrypt_create_iv') && !defined('PHALANGER')) {
                $buffer = mcrypt_create_iv($raw_salt_len, MCRYPT_DEV_URANDOM);
                if ($buffer) {
                    $buffer_valid = true;
                }
            }
            if (!$buffer_valid && function_exists('openssl_random_pseudo_bytes')) {
                $buffer = openssl_random_pseudo_bytes($raw_salt_len);
                if ($buffer) {
                    $buffer_valid = true;
                }
            }
            if (!$buffer_valid && @is_readable('/dev/urandom')) {
                $f = fopen('/dev/urandom', 'r');
                $read = PasswordCompat\binary\_strlen($buffer);
                while ($read < $raw_salt_len) {
                    $buffer .= fread($f, $raw_salt_len - $read);
                    $read = PasswordCompat\binary\_strlen($buffer);
                }
                fclose($f);
                if ($read >= $raw_salt_len) {
                    $buffer_valid = true;
                }
            }
            if (!$buffer_valid || PasswordCompat\binary\_strlen($buffer) < $raw_salt_len) {
                $bl = PasswordCompat\binary\_strlen($buffer);
                for ($i = 0; $i < $raw_salt_len; $i++) {
                    if ($i < $bl) {
                        $buffer[$i] = $buffer[$i] ^ chr(mt_rand(0, 255));
                    } else {
                        $buffer .= chr(mt_rand(0, 255));
                    }
                }
            }
            $salt = $buffer;
            $salt_requires_encoding = true;
        }
        if ($salt_requires_encoding) {
            // encode string with the Base64 variant used by crypt
            $base64_digits =
                'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
            $bcrypt64_digits =
                './ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';

            $base64_string = base64_encode($salt);
            $salt = strtr(rtrim($base64_string, '='), $base64_digits, $bcrypt64_digits);
        }
        $salt = PasswordCompat\binary\_substr($salt, 0, $required_salt_len);

        $hash = $hash_format . $salt;

        $ret = crypt($password, $hash);

        if (!is_string($ret) || PasswordCompat\binary\_strlen($ret) != $resultLength) {
            return false;
        }

        return $ret;
    }

    /**
     * Get information about the password hash. Returns an array of the information
     * that was used to generate the password hash.
     *
     * array(
     *    'algo' => 1,
     *    'algoName' => 'bcrypt',
     *    'options' => array(
     *        'cost' => 10,
     *    ),
     * )
     *
     * @param string $hash The password hash to extract info from
     *
     * @return array The array of information about the hash.
     */
    function password_get_info($hash) {
        $return = array(
            'algo' => 0,
            'algoName' => 'unknown',
            'options' => array(),
        );
        if (PasswordCompat\binary\_substr($hash, 0, 4) == '$2y$' && PasswordCompat\binary\_strlen($hash) == 60) {
            $return['algo'] = PASSWORD_BCRYPT;
            $return['algoName'] = 'bcrypt';
            list($cost) = sscanf($hash, "$2y$%d$");
            $return['options']['cost'] = $cost;
        }
        return $return;
    }

    /**
     * Determine if the password hash needs to be rehashed according to the options provided
     *
     * If the answer is true, after validating the password using password_verify, rehash it.
     *
     * @param string $hash    The hash to test
     * @param int    $algo    The algorithm used for new password hashes
     * @param array  $options The options array passed to password_hash
     *
     * @return boolean True if the password needs to be rehashed.
     */
    function password_needs_rehash($hash, $algo, array $options = array()) {
        $info = password_get_info($hash);
        if ($info['algo'] != $algo) {
            return true;
        }
        switch ($algo) {
            case PASSWORD_BCRYPT:
                $cost = isset($options['cost']) ? $options['cost'] : 10;
                if ($cost != $info['options']['cost']) {
                    return true;
                }
                break;
        }
        return false;
    }

    /**
     * Verify a password against a hash using a timing attack resistant approach
     *
     * @param string $password The password to verify
     * @param string $hash     The hash to verify against
     *
     * @return boolean If the password matches the hash
     */
    function password_verify($password, $hash) {
        if (!function_exists('crypt')) {
            trigger_error("Crypt must be loaded for password_verify to function", E_USER_WARNING);
            return false;
        }
        $ret = crypt($password, $hash);
        if (!is_string($ret) || PasswordCompat\binary\_strlen($ret) != PasswordCompat\binary\_strlen($hash) || PasswordCompat\binary\_strlen($ret) <= 13) {
            return false;
        }

        $status = 0;
        for ($i = 0; $i < PasswordCompat\binary\_strlen($ret); $i++) {
            $status |= (ord($ret[$i]) ^ ord($hash[$i]));
        }

        return $status === 0;
    }
}

}

namespace PasswordCompat\binary {
    /**
     * Count the number of bytes in a string
     *
     * We cannot simply use strlen() for this, because it might be overwritten by the mbstring extension.
     * In this case, strlen() will count the number of *characters* based on the internal encoding. A
     * sequence of bytes might be regarded as a single multibyte character.
     *
     * @param string $binary_string The input string
     *
     * @internal
     * @return int The number of bytes
     */
    function _strlen($binary_string) {
           if (function_exists('mb_strlen')) {
               return mb_strlen($binary_string, '8bit');
           }
           return strlen($binary_string);
    }

    /**
     * Get a substring based on byte limits
     *
     * @see _strlen()
     *
     * @param string $binary_string The input string
     * @param int    $start
     * @param int    $length
     *
     * @internal
     * @return string The substring
     */
    function _substr($binary_string, $start, $length) {
       if (function_exists('mb_substr')) {
           return mb_substr($binary_string, $start, $length, '8bit');
       }
       return substr($binary_string, $start, $length);
   }

}
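
A subtle step in the library above is re-encoding the random salt: crypt() expects a Base64 variant whose alphabet is ./A-Za-z0-9 instead of the standard one, which is exactly what the strtr() call does. The same translation can be sketched from a shell (a sketch; coreutils base64 and tr are assumed available):

```shell
# Standard Base64, padding stripped, then translated to crypt()'s ./A-Za-z0-9 alphabet
printf 'abc' | base64 | tr -d '=' | tr 'A-Za-z0-9+/' './A-Za-z0-9'   # YWJj -> WUHh
```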

Jboss7 : useful jboss-cli commands

JBoss 7 offers jboss-cli (a command line interface), which makes life easier for administrators.

First of all, connect to jboss-cli as follows

connect to jboss-cli : connect

[witr@WITR-PC]#cd $JBOSS_HOME
[witr@WITR-PC]#./bin/jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
[standalone@localhost:9991 /]

Then, here are some useful jboss-cli commands

undeploy and redeploy artifacts : deploy/undeploy

[standalone@localhost:9991 /] undeploy witr-4.2.15.war
[standalone@localhost:9991 /] deploy standalone/deployments/witr-4.2.15.war

list already deployed artifacts : deploy -l

[standalone@localhost:9991 /] deploy -l
NAME                          RUNTIME-NAME                  ENABLED STATUS 
wReport-4.0.13.war            wReport-4.0.13.war            true    OK     
witr-4.2.15.war               witr-4.2.15.war               true    OK     
witr-ws-4.0.1.jar             witr-ws-4.0.1.jar             true    OK     
[standalone@localhost:9991 /] 

handle log levels : /subsystem=logging

Add new log level “INFO” on class net.witr.home.Himmel

[standalone@localhost:9991 /] /subsystem=logging/logger=net.witr.home.Himmel:add(level=INFO)
{"outcome" => "success"}
[standalone@localhost:9991 /] 

Change log level of class net.witr.home.Himmel to “DEBUG”

[standalone@localhost:9991 /] /subsystem=logging/logger=net.witr.home.Himmel:change-log-level(level=DEBUG)
{"outcome" => "success"}
[standalone@localhost:9991 /] 

Remove log level of class net.witr.home.Himmel

[standalone@localhost:9991 /] /subsystem=logging/logger=net.witr.home.Himmel:remove
{"outcome" => "success"}
[standalone@localhost:9991 /] 

handle system properties : /system-property

Add, read, and remove the system property witrActive

[standalone@localhost:9991 /] /system-property=witrActive:add(value=Y)
{"outcome" => "success"}
[standalone@localhost:9991 /] /system-property=witrActive:read-resource
{
    "outcome" => "success",
    "result" => {"value" => "Y"}
}
[standalone@localhost:9991 /] /system-property=witrActive:remove
{"outcome" => "success"}
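
All of the commands above can also be batched non-interactively with the --commands option, which is handy in scripts. A sketch (it assumes a server is running, and reuses the example names from above):

```shell
# Batch mode: connect, add a system property, read it back, all in one invocation
$JBOSS_HOME/bin/jboss-cli.sh --commands="connect,/system-property=witrActive:add(value=Y),/system-property=witrActive:read-resource"
```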

XSLT java debug

You may need some debug messages to be logged to standard output during XSL transformations

java debug class

package net.witr.xslt;

public class XslDebug {

    public static void debug(String msg){
        System.out.println(msg);
    }

}

xsl code

<xsl:stylesheet xmlns:witrdebug="net.witr.xslt.XslDebug">
  ..
  <xsl:template match="DIV">
    <xsl:variable name="debugName" select="name(.)"></xsl:variable>
    <xsl:variable name="debugStyle" select="@style"></xsl:variable>
    <xsl:value-of select="witrdebug:debug(concat('Matched ',$debugName,'; style :',$debugStyle))"></xsl:value-of>
  </xsl:template>
  ..
</xsl:stylesheet>

Apache POI : delete excel empty rows

The POI method sheet.removeRow() removes the row’s cells but doesn’t clear the row itself from the Excel output. That is, if you have three rows and you remove the second one, you get:

  • an unchanged first row
  • an empty second row
  • an unchanged third row

So to completely remove a row you must then shift up the rows that follow. But for some reason the POI sheet.shiftRows() API method didn’t work for me. The code below does it

    // 'sh' is assumed to be the Sheet field of the enclosing class
    public void deleteEmptyRows(){
        SXSSFSheet sheet = (SXSSFSheet)sh;
        for ( int r=sheet.getLastRowNum(); r >= 0; r-- ){
            Row row     = sheet.getRow( r );

            // if no row exists here; then nothing to do; next!
            if ( row == null )
                continue;

            int lastColumn = row.getLastCellNum();
            boolean rowToDelete = true;
            if(lastColumn > -1){
                for ( int x=0; x < lastColumn + 1; x++ ){
                    Cell cell    = row.getCell(x);
                    if ( cell != null && cell.getStringCellValue() != null){
                        String cellTrimValue = cell.getStringCellValue().trim();
                        if(!cellTrimValue.isEmpty()){
                            rowToDelete = false;
                            break;
                        }
                    }
                }
            }

            if(rowToDelete){
                if(r == sheet.getLastRowNum()){
                    sheet.removeRow(row);
                }else{
                    sheet.removeRow(row);
                    for(int j= r+1; j <= sheet.getLastRowNum(); j++){
                        Row rowToShift = sheet.getRow(j);
                        rowToShift.setRowNum(j-1);
                    }
                }
            }
        }
    }

Apache POI : delete list of excel columns / keep only list of excel columns

First, you need the deleteColumn() code from here

Remove some columns with provided headers

    public void deleteColumnsWithHeader(String columnHeader){
        SXSSFSheet sheet = (SXSSFSheet)sh;
        Row row     = sheet.getRow( 0 );
        if ( row == null ){
            return;
        }

        int lastColumn = row.getLastCellNum();

        for ( int x=lastColumn; x >= 0; x-- ){
            Cell headerCell    = row.getCell(x);
            if ( headerCell != null && headerCell.getStringCellValue() != null && 
                 headerCell.getStringCellValue().equalsIgnoreCase(columnHeader)){
                deleteColumn(x);
            }
        }
    }

Keep only columns with provided headers

    public void keepOnlyColumnsWithHeaders(List<String> columnHeaders){
        SXSSFSheet sheet = (SXSSFSheet)sh;
        Row row     = sheet.getRow( 0 );
        if ( row == null ){
            return;
        }

        int lastColumn = row.getLastCellNum();

        for ( int x=lastColumn; x >= 0; x-- ){
            Cell headerCell    = row.getCell(x);
            if ( headerCell != null && headerCell.getStringCellValue() != null && 
                 !columnHeaders.contains(headerCell.getStringCellValue())){
                deleteColumn(x);
            }
        }
    }

Apache POI : remove excel columns

Unfortunately there is no method to remove Excel columns in the POI API. The code below (found here) can help

    public void deleteColumn(SXSSFSheet sheet, int columnToDelete){        
        int maxColumn = 0;
        for ( int r=0; r < sheet.getLastRowNum()+1; r++ ){
            Row row     = sheet.getRow(r);

            // if no row exists here; then nothing to do; next!
            if ( row == null )
                continue;

            // if the row doesn't have this many columns then we are good; next!
            int lastColumn = row.getLastCellNum();
            if ( lastColumn > maxColumn )
                maxColumn = lastColumn;

            if ( lastColumn < columnToDelete )
                continue;

            for ( int x=columnToDelete+1; x < lastColumn + 1; x++ ){
                Cell oldCell    = row.getCell(x-1);
                if ( oldCell != null )
                    row.removeCell( oldCell );

                Cell nextCell   = row.getCell( x );
                if ( nextCell != null ){
                    Cell newCell    = row.createCell( x-1, nextCell.getCellType() );
                    cloneCell(newCell, nextCell);
                }
            }
        }
    }

    private void cloneCell( Cell cNew, Cell cOld ){
        cNew.setCellComment( cOld.getCellComment() );
        cNew.setCellStyle( cOld.getCellStyle() );

        switch ( cNew.getCellType() ){
            case Cell.CELL_TYPE_BOOLEAN:{
                cNew.setCellValue( cOld.getBooleanCellValue() );
                break;
            }
            case Cell.CELL_TYPE_NUMERIC:{
                cNew.setCellValue( cOld.getNumericCellValue() );
                break;
            }
            case Cell.CELL_TYPE_STRING:{
                cNew.setCellValue( cOld.getStringCellValue() );
                break;
            }
            case Cell.CELL_TYPE_ERROR:{
                cNew.setCellValue( cOld.getErrorCellValue() );
                break;
            }
            case Cell.CELL_TYPE_FORMULA:{
                cNew.setCellFormula( cOld.getCellFormula() );
                break;
            }
        }

    }

load css with javascript

The following JavaScript function loads a CSS file if it’s not already loaded

function loadIfNotYetLoaded(cssFilePathName){
	var cssLoaded = false;
	var ss = document.styleSheets;
	for (var i = 0, max = ss.length; i < max; i++) {
		var sheet = ss[i];
		if(sheet.href && sheet.href.indexOf(cssFilePathName) != -1){
			cssLoaded = true;
			break;
		}
	}

	if(!cssLoaded){		
		var link = document.createElement("link");
		link.rel = "stylesheet";      
		link.href = cssFilePathName;
		document.getElementsByTagName("head")[0].appendChild(link);		
	}
}

Oracle DB : Beware when specifying column max length

To create a table STUDENT with a single column NAME which must not exceed 6 characters, you may type the following

CREATE TABLE STUDENT
  (
    NAME VARCHAR2 (6)
  );

But this is wrong if NLS_LENGTH_SEMANTICS=BYTE (default value) : Typing VARCHAR2(6) means 6 bytes and not 6 characters. In this case “Claude” will be accepted but not “Noémie”, because ‘é’ is encoded with 2 bytes and the sum will exceed 6 bytes.
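
A quick way to see the byte counts from a shell (a UTF-8 terminal is assumed; wc -c counts bytes, not characters):

```shell
printf 'Claude' | wc -c   # 6 bytes: fits in VARCHAR2(6) under BYTE semantics
printf 'Noémie' | wc -c   # 7 bytes: rejected, although it is only 6 characters
```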

The right way to specify the column max length is as follows

CREATE TABLE STUDENT
  (
    NAME VARCHAR2 (6 CHAR)
  );

NLS_LENGTH_SEMANTICS enables you to create CHAR and VARCHAR2 columns using either byte or character length semantics. Existing columns are not affected.

NCHAR, NVARCHAR2, CLOB, and NCLOB columns are always character-based. You may be required to use byte semantics in order to maintain compatibility with existing applications.

NLS_LENGTH_SEMANTICS does not apply to tables in SYS and SYSTEM. The data dictionary always uses byte semantics.

Get and/or set NLS_LENGTH_SEMANTICS value

-- get NLS_LENGTH_SEMANTICS value
SELECT * FROM NLS_database_PARAMETERS WHERE PARAMETER = 'NLS_LENGTH_SEMANTICS';
-- set NLS_LENGTH_SEMANTICS to CHAR
ALTER system SET NLS_LENGTH_SEMANTICS=CHAR; -- restart to take effect

javascript : scroll window or container to view element

The prototypejs framework is used for that.

  • scroll container to view element inside : scrollTo(myElement, divContainer);
  • scroll window to view element inside : scrollTo(myElement, null);

function scrollTo(myElement, scrollContainer){
	if(scrollContainer){
		myElement.scrollIntoView();
	}else{
		currentWindowScrollY = document.documentElement.scrollTop;
		targetWindowScrollY = Element.cumulativeOffset(myElement)[1] - window.height() + myElement.getHeight();
		if(targetWindowScrollY > currentWindowScrollY){
			window.scrollTo(window.scrollX, targetWindowScrollY);
		}
	}
}

checkout single file from svn repo

svn co can only check out directories, not individual files. Use instead:

svn export https://svn.repo/path/to/file/witr.png /tmp/witr.png

Another alternative for text files

svn cat https://svn.repo/path/to/file/witr.txt > /tmp/witr.txt

disk analyzer : how to ignore mounts

Under Ubuntu, the default disk analyzer (baobab) doesn’t offer an option to ignore directories. If we want to analyze the root folder / while ignoring mounts under /media, for example, the easy way is to:

  • create a new temporary folder,
  • mount the root filesystem on it,
  • analyze the temporary folder,
  • unmount and delete it

So, first proceed as follows

> sudo mkdir tmpAnalysis
> sudo mount /dev/sda1 tmpAnalysis
> baobab tmpAnalysis

Analysis is ready to explore. Then

> sudo umount tmpAnalysis
> sudo rm -rf tmpAnalysis
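
If a plain size listing is enough, du with -x (stay on one filesystem) gives the same ignore-mounts effect without any temporary mount (a sketch; GNU du is assumed for --max-depth):

```shell
# -x: do not descend into directories living on other filesystems (/media mounts, etc.)
sudo du -x -h --max-depth=1 /
```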

list html page elements not having id

ICEfaces ajax render performance issues can be a consequence of DOM elements without an id. This post helps to list all elements that have no id, so they can be fixed.

Open your page in your browser, then define the following function in the developer console

function chkid(ch){
  if(ch.id==""){
    console.log(ch.tagName+' : '+ch.id + ' childof ==> ' + ch.up().id)
  }
  ch.childElements().each(
    function(fils){
      chkid(fils)
    }
  );
}

Then call chkid with any DOM element. To check the whole page, call chkid on the body element’s children

$$('body')[0].childElements().each(function(ch){chkid(ch)});

Note: here we use the prototypejs framework (up(), childElements(), and $$() come from it)

ajax displaying jboss server state

We aim to have a web page which continually displays the states of three JBoss 7 servers. For this we need the shell script (status.sh) found here.

environment requirement

servers, all on the same machine :

  • apache server

  • three jboss7 servers: jboss1, jboss2, jboss3

get result of status.sh in our php file

The following PHP code displays one JBoss server state

<?php echo exec('/path/to/script/status.sh /path/to/jboss/home'); ?>

For our example we need three PHP files, each containing the previous one-line PHP code for one of the three JBoss servers: state-jboss1.php, state-jboss2.php, and state-jboss3.php

ajax state display

Here we use the load() method of the jQuery JavaScript framework (jQuery must be included in the page).

  <span id="jboss"></span><br></br>

  <script language="javascript">
      $('#jboss').load('state-jboss.php');
  </script>

continuous update of servers state

We’ll resort to a JavaScript timer

setInterval(function(){$('#jboss').load('state-jboss.php')},3000);

deployment

root folder :

  • create folder jboss-state in apache server root www

files :

  • www/jboss-state/status.sh

  • www/jboss-state/index.html

  • www/jboss-state/state-jboss1.php

  • www/jboss-state/state-jboss2.php

  • www/jboss-state/state-jboss3.php

full code

status.sh (the myArtifactName string in the code must be replaced by the actual artifact name)

#!/bin/bash

res=$($1/bin/jboss-cli.sh --commands="connect,read-attribute server-state" 2>&1)

if [[ $res != 'running' ]]
then
        echo "STOPPED ($res)"
else
        artifactDeployed=$($1/bin/jboss-cli.sh --commands="connect,deploy -l" | grep myArtifactName | grep OK  2>&1)
        if [[ X$artifactDeployed == 'X' ]]
        then
                echo "STARTING ($res / $artifactDeployed)"
        else
                echo "RUNNING ($res / $artifactDeployed)"
        fi
fi

state-jboss1.php

state-jboss2.php

state-jboss3.php

index.html

<html>
  <body>
    <!-- jQuery must be included here for load() to work -->
    Jboss1 state : <span id="jboss1"></span><br></br>
    Jboss2 state : <span id="jboss2"></span><br></br>
    Jboss3 state : <span id="jboss3"></span><br></br>

    <script language="javascript">
        setInterval(function(){$('#jboss1').load('state-jboss1.php')},3000);
        setInterval(function(){$('#jboss2').load('state-jboss2.php')},3000);
        setInterval(function(){$('#jboss3').load('state-jboss3.php')},3000);
    </script>
  </body>
</html>

get jboss7 server state with shell script

The shell script (status.sh) below returns one of three JBoss server states:

  • STOPPED : jboss is stopped (script could not connect with jboss-cli)

  • STARTING : jboss is starting (its state is running but project is not deployed)

  • RUNNING : jboss is ready (its state is running and project is deployed successfully)

Use :

> ./status.sh /path/to/jbossHome

This script may be improved. Also, you need to replace the myArtifactName placeholder in the code with your actual artifact name

#!/bin/bash

res=$($1/bin/jboss-cli.sh --commands="connect,read-attribute server-state" 2>&1)

if [[ $res != 'running' ]]
then
        echo "STOPPED ($res)"
else
        artifactDeployed=$($1/bin/jboss-cli.sh --commands="connect,deploy -l" | grep myArtifactName | grep OK  2>&1)
        if [[ X$artifactDeployed == 'X' ]]
        then
                echo "STARTING ($res / $artifactDeployed)"
        else
                echo "RUNNING ($res / $artifactDeployed)"
        fi
fi
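
The three-way decision above can be factored into a small function and exercised without a running server. This is only a sketch of the logic, with the two jboss-cli outputs passed in as plain strings:

```shell
# classify SERVER_STATE DEPLOY_LINE -> STOPPED | STARTING | RUNNING
classify() {
    res="$1"; artifactDeployed="$2"
    if [ "$res" != "running" ]; then
        echo "STOPPED"
    elif [ -z "$artifactDeployed" ]; then
        echo "STARTING"
    else
        echo "RUNNING"
    fi
}

classify "running" "myArtifactName.war OK"   # prints RUNNING
```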

hbm2java : specify generated beans package

To specify the package of the generated Java beans in the hbm2java ant task, add the “packagename” attribute to the jdbcconfiguration tag, as follows:

<jdbcconfiguration revengfile="reveng.xml" packagename="net.witr.hbm2java" configurationfile="hibernate.cfg.xml"></jdbcconfiguration>

hbm2java and names of tables

If no bean was generated by hbm2java, make sure the table names in reveng.xml match the database table names. Table names are case sensitive, at least with an Oracle database

hbm2java : hibernate annotations missed

If the build.xml used to generate Hibernate beans from the DB produces POJO beans without annotations, add the “ejb3” attribute to the hbm2java tag:

   ...
   <hbm2java ejb3="true"></hbm2java>
   ...

hbm2java with ant step by step

  • workspace structure:

    hbm/
      build.xml
      hibernate.cfg.xml
      reveng.xml
      lib/

  • build.xml

  • hibernate.cfg.xml, with at least: connection driver oracle.jdbc.driver.OracleDriver, connection URL jdbc:oracle:thin:@127.0.0.1:1522:WITRDB, username/password witr/witr, and dialect org.hibernate.dialect.Oracle10gDialect
  • reveng.xml

    ...etc

then type

> cd hbm
> ant

Your Hibernate beans are ready in the hbm/generated directory

create words cloud with worditout.com

http://worditout.com/word-cloud/make-a-new-one

  • type original text

  • edit words list

  • specify width and height of generated image

  • specify colors and sizes of generated words

  • regenerate output

worditout generated the word cloud of my resume for me: words cloud

Wordpress changes don't take effect immediately

Changes made to PHP files, and appearance customizations, don’t take effect immediately if the WordPress cache plugin is not disabled.

Edit wp-config.php and disable cache :

define('WP_CACHE', false); //Added by WP-Cache Manager

Cache plugins are very helpful, but keep the cache DISABLED until you have finished any customization work.

nexus exception : Cannot construct org.codehaus.plexus.util.xml.Xpp3Dom as it does not have a no-args constructor

Sonatype Nexus is deployed on Tomcat 6. At startup, Catalina raises the following exception ($TOMCAT_HOME/logs/catalina.out)

2013-12-09 09:36:09 WARN  - o.c.p.PlexusContain~          - Error starting: class org.sonatype.nexus.DefaultNexus
org.codehaus.plexus.personality.plexus.lifecycle.phase.StartingException: Could not start Nexus!
        at org.sonatype.nexus.DefaultNexus.start(DefaultNexus.java:663)
        at org.codehaus.plexus.PlexusLifecycleManager.start(PlexusLifecycleManager.java:303)
        at org.codehaus.plexus.PlexusLifecycleManager.manageLifecycle(PlexusLifecycleManager.java:254)
        at org.codehaus.plexus.PlexusLifecycleManager.manage(PlexusLifecycleManager.java:154)
        at org.sonatype.guice.plexus.binders.PlexusBeanBinder.afterInjection(PlexusBeanBinder.java:78)
[...]
Caused by: com.thoughtworks.xstream.converters.ConversionException: Cannot construct org.codehaus.plexus.util.xml.Xpp3Dom as it does not have a no-args constructor
---- Debugging information ----
message             : Cannot construct org.codehaus.plexus.util.xml.Xpp3Dom as it does not have a no-args constructor
cause-exception     : com.thoughtworks.xstream.converters.reflection.ObjectAccessException
cause-message       : Cannot construct org.codehaus.plexus.util.xml.Xpp3Dom as it does not have a no-args constructor
class               : org.sonatype.nexus.configuration.model.CRepository
required-type       : org.codehaus.plexus.util.xml.Xpp3Dom
path                : /org.sonatype.nexus.configuration.model.CRepository/externalConfiguration
line number         : 23
-------------------------------

It happens because this version of Nexus does not work correctly with Java 7; you must fall back to Java 6.
Edit $TOMCAT_HOME/bin/setclasspath.sh
and add, at the beginning, a simple export of the JAVA_HOME environment variable locating the Java 6 home.
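
A minimal sketch of that export (the path below is a placeholder; point it at your actual Java 6 installation):

```shell
# Placeholder path: adjust to where your Java 6 lives
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
```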

get last 10 log of several svn projects

We have 10 projects checked out from svn repo:
/workspace/proj1
/workspace/proj2
/workspace/proj3
… etc

Get the last 10 log entries of each project with a shell command line (run from /workspace):

ls -1 | while read proj; do echo "============$proj"; svn log -l 10 $proj; done

Interactive Ruby : simplify your computing

Interactive Ruby lets you make complex computations without having to write a whole program.
Under Linux, type irb.

Ex1. calculate (10-7+500)*(4 to the power 3) - square root of 900

irb(main):001:0> (10-7+500)*(4**3)-Math.sqrt(900) => 32162.0

Ex2. get the exact date 14 days before now (i.e. 14x24x60x60 seconds before now)

irb(main):002:0> Time.new - (14*24*60*60) => Thu Nov 21 15:52:53 +0100 2013
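
For comparison, the first computation also works in plain shell arithmetic (POSIX shells have no exponent or sqrt operator, so 4**3 = 64 and sqrt(900) = 30 are pre-expanded here):

```shell
echo $(( (10 - 7 + 500) * 64 - 30 ))   # prints 32162, matching irb's 32162.0
```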

upgrade sonar 3 to 4

As easy as 5 simple steps:

  • Download and unzip new sonarqube-4.0.zip : > unzip sonarqube-4.0.zip -d $NEW_SONAR_HOME
  • Manually edit sonarqube-4.0/conf/sonar.properties and sonarqube-4.0/conf/wrapper.conf and copy old properties values from old sonar conf
  • Copy extensions from old sonar to the new one (only extensions not having correspondent in the new sonar)
  • stop old sonar and start the new one :

    $OLD_SONAR_HOME/bin/yourOS/sonar.sh stop
    $NEW_SONAR_HOME/bin/yourOS/sonar.sh start

  • Go to http://localhost:9000/setup (assuming sonar listens on localhost) and finally click the Upgrade button

losing sonar admin password

By default sonar creates an admin account: user admin, password admin.
When the sonar admin password has been changed and lost, a simple way to reset the password back to admin is a simple SQL command:

mysql -u sonar -p
mysql> use sonar
mysql> update users set crypted_password = '88c991e39bb88b94178123a849606905ebf440f5', salt = '6522f3c5007ae910ad690bb1bdbf264a34884c6d' where login = 'admin';

That’s all.

select one of java alternatives in ubuntu

Type the command: update-alternatives --config java
This will list all java versions. Type the number of the desired version, then press Enter to select it.
Check the new version with the command: java -version

glogg

A simple way to navigate your big log files.
To install, type: sudo apt-get install glogg

ajax loader with APEX

1. Prerequisites in plsql oracle packages

1.1. Stored procedure GENERATE_REPORT

procedure GENERATE_REPORT(p_report_id in number) is
begin
	--------------
	-- long time process to generate report
	-- blabla
	-- babla
	----------------
	
	update REPORT
		set STATUS = 'FINISH'
	where REPORT_ID = p_report_id;
	
	exception when others then
		update REPORT
			set STATUS = 'ERROR'
		where REPORT_ID = p_report_id;	
end;

1.2. Stored function START_GENERATE_REPORT

function START_GENERATE_REPORT return number is
	l_report_id number;
	l_dummy     number;
begin
	l_report_id := SEQ_REPORT.nextval;
	insert into 
	REPORT(REPORT_ID,STATUS)
	VALUES(l_report_id,'GENERATING');
	
	DBMS_JOB.SUBMIT(l_dummy,'begin GENERATE_REPORT('||l_report_id||'); end;');
	
	return l_report_id;
end;

1.3. Stored function GET_REPORT_STATUS

function GET_REPORT_STATUS(p_report_id in number) return varchar2 is
 l_status varchar2(100);
begin
	select STATUS into l_status
		from REPORT
	where REPORT_ID = p_report_id;
	
	return l_status;
	
	exception when others then
		return 'ERROR';
end;

2. APEX application

HIDDEN TEXT ELEMENT ‘PAGE_REPORT_ID’

BUTTON ELEMENT ‘generate’: button click submits page

PAGE PROCESS ELEMENT ‘processGenerate’: activated on ‘generate’ button click

declare
  l_report_id number;
begin
  l_report_id := START_GENERATE_REPORT;
  :PAGE_REPORT_ID := l_report_id;
end;

HTML FORM ELEMENT ‘loader’

  • condition : PAGE_REPORT_ID content is not null
  • content :

APPLICATION PROCESS ELEMENT : PROCESS_REPORT_STATUS
In apex, go to application processes and create a new process. Be sure to select “On Demand” as the Point property of your process.

declare
	l_param_report_id varchar2(100);
	l_result_status   varchar2(100);
begin
	owa_util.mime_header('text/plain', FALSE );
	htp.p('Cache-Control: no-cache');
	htp.p('Pragma: no-cache');
	owa_util.http_header_close;	
	
	l_param_report_id := wwv_flow.g_x01;
	l_result_status := GET_REPORT_STATUS(to_number(l_param_report_id));
	htp.prn(l_result_status);
end;

IN PAGE HEADER
put the following javascript function :

function refreshReportLoader(){
	var get = new htmldb_Get(null, $v('pFlowId'), 'APPLICATION_PROCESS=PROCESS_REPORT_STATUS', 0);
	get.addParam('x01', $v('PAGE_REPORT_ID'));
	gStatus = get.get('TEXT');

	var loader = document.getElementById("divLoader");
	loader.innerHTML = gStatus;

	// keep polling every 3 seconds until the report is finished or failed
	if(gStatus != 'FINISH' && gStatus != 'ERROR'){
		setTimeout(function(){refreshReportLoader()}, 3000);
	}
}

3. how to use

click the ‘generate’ button and watch the loader div, whose content changes according to REPORT.STATUS

wireshark under ubuntu

  1. configure the network of the virtual machine
    in VirtualBox -> settings of the virtual machine (ubuntu) -> Network -> set Promiscuous Mode to Allow All
  2. start virtual Machine ubuntu
  3. install wireshark

    sudo apt-get install wireshark

  4. enable root privilege to dumpcap

    sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap

  5. that’s all

Secure Shell Client under Windows7 crashes when adding tunnel

With the GUI of Secure Shell Client under Windows 7, when you try to add an outgoing tunnel, it crashes and Windows tells you that the program will be terminated.

Solution :

  1. create an incoming tunnel with your ssh profile :
        Menu profiles –> edit profiles
        Then choose “Tunneling” tab and finally choose “Incoming” tab
        Save
  2. go to C:\Users\witr\Application Data\SSH and edit your profile XXX.ssh2 with a simple text editor
    you must find your declared incoming tunnel like so :

    ....
    [Outgoing Tunnels]

    [Incoming Tunnels]
    Tunnel=S:witrTunnel,1521,localhost,1521,0,tcp
    ....

  3. move declared incoming tunnel to outgoing tunnels section and save
  4. reconnect with Secure Shell Client; it should now work.

Use closure (javascript code analyser)

First of all you must have easy_install python module : cf. get-easyinstall-python-module

Install closure linter :

cd /tmp
sudo easy_install http://closure-linter.googlecode.com/files/closure_linter-latest.tar.gz
Downloading http://closure-linter.googlecode.com/files/closure_linter-latest.tar.gz Processing closure_linter-latest.tar.gz Writing /tmp/easy_install-Xt6LOk/closure_linter-2.3.11/setup.cfg Running closure_linter-2.3.11/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Xt6LOk/closure_linter-2.3.11/egg-dist-tmp-w7sS6l zip_safe flag not set; analyzing archive contents... Adding closure-linter 2.3.11 to easy-install.pth file Installing fixjsstyle script to /usr/local/bin Installing gjslint script to /usr/local/bin Installed /usr/local/lib/python2.7/dist-packages/closure_linter-2.3.11-py2.7.egg Processing dependencies for closure-linter==2.3.11 Searching for python-gflags Reading https://pypi.python.org/simple/python-gflags/ Reading http://code.google.com/p/python-gflags Best match: python-gflags 2.0 Downloading https://pypi.python.org/packages/source/p/python-gflags/python-gflags-2.0.tar.gz#md5=23c9a793959a54971b1f094b0c6d03b1 Processing python-gflags-2.0.tar.gz Writing /tmp/easy_install-SlgYwJ/python-gflags-2.0/setup.cfg Running python-gflags-2.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-SlgYwJ/python-gflags-2.0/egg-dist-tmp-kTyloa zip_safe flag not set; analyzing archive contents... Adding python-gflags 2.0 to easy-install.pth file Installed /usr/local/lib/python2.7/dist-packages/python_gflags-2.0-py2.7.egg Finished processing dependencies for closure-linter==2.3.11

Once finished, you can use closure linter as follows :

Analyse one js script file :

gjslint path/to/my/file.js

Analyse entire directory :

gjslint -r path/to/my/directory

more details here : https://developers.google.com/closure/utilities/docs/linter_howto

get easy_install python module

Install command without root privileges

wget https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py -O - | python

Output logs of install

--2013-08-01 10:28:48--  https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py Résolution de bitbucket.org (bitbucket.org)... 131.103.20.168, 131.103.20.167 Connexion vers bitbucket.org (bitbucket.org)|131.103.20.168|:443... connecté. requête HTTP transmise, en attente de la réponse... 200 OK Longueur: 8815 (8,6K) [text/plain] Sauvegarde en : «STDOUT» 100%[==============================>] 8 815       --.-K/s   ds 0,001s 2013-08-01 10:28:50 (10,3 MB/s) - envoi vers sortie standard [8815/8815] Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-0.9.8.tar.gz Extracting in /tmp/tmpj5lJs_ Now working in /tmp/tmpj5lJs_/setuptools-0.9.8 Installing Setuptools running install error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory:     [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/test-easy-install-4089.write-test' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was:     /usr/local/lib/python2.7/dist-packages/ Perhaps your account does not have write access to this directory?  If the installation directory is a system-owned directory, you may need to sign in as the administrator or "root" account.  If you do not have administrative access to this machine, you may wish to choose a different installation directory, preferably one that is listed in your PYTHONPATH environment variable. For information on other options, you may wish to consult the documentation at:   https://pythonhosted.org/setuptools/easy_install.html Please make the appropriate changes for your system and try again. Something went wrong during the installation. See the error message above.

Install command with root privileges

sudo wget https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py -O - | sudo python

Output logs of install

--2013-08-01 10:36:50--  https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py Résolution de bitbucket.org (bitbucket.org)... 131.103.20.168, 131.103.20.167 Connexion vers bitbucket.org (bitbucket.org)|131.103.20.168|:443... connecté. requête HTTP transmise, en attente de la réponse... 200 OK Longueur: 8815 (8,6K) [text/plain] Sauvegarde en : «STDOUT» 100%[============================================================================================================================================================================================================================>] 8 815       --.-K/s   ds 0,001s 2013-08-01 10:36:51 (8,82 MB/s) - envoi vers sortie standard [8815/8815] Extracting in /tmp/tmpID3jKx Now working in /tmp/tmpID3jKx/setuptools-0.9.8 Installing Setuptools running install running bdist_egg running egg_info writing dependency_links to setuptools.egg-info/dependency_links.txt writing requirements to setuptools.egg-info/requires.txt writing setuptools.egg-info/PKG-INFO writing top-level names to setuptools.egg-info/top_level.txt writing entry points to setuptools.egg-info/entry_points.txt reading manifest file 'setuptools.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'setuptools.egg-info/SOURCES.txt' installing library code to build/bdist.linux-i686/egg running install_lib running build_py creating build creating build/lib.linux-i686-2.7 copying pkg_resources.py -> build/lib.linux-i686-2.7 copying easy_install.py -> build/lib.linux-i686-2.7 creating build/lib.linux-i686-2.7/setuptools copying setuptools/py27compat.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/script template.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/py24compat.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/script template (dev).py -> build/lib.linux-i686-2.7/setuptools copying setuptools/site-patch.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/__init__.py -> 
build/lib.linux-i686-2.7/setuptools copying setuptools/ssl_support.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/archive_util.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/depends.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/package_index.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/sandbox.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/dist.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/extension.py -> build/lib.linux-i686-2.7/setuptools copying setuptools/compat.py -> build/lib.linux-i686-2.7/setuptools creating build/lib.linux-i686-2.7/_markerlib copying _markerlib/markers.py -> build/lib.linux-i686-2.7/_markerlib copying _markerlib/__init__.py -> build/lib.linux-i686-2.7/_markerlib creating build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/upload_docs.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/bdist_egg.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/sdist.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/upload.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/egg_info.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/install_scripts.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/rotate.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/easy_install.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/build_py.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/install_lib.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/bdist_rpm.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/saveopts.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/__init__.py -> build/lib.linux-i686-2.7/setuptools/command copying 
setuptools/command/test.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/alias.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/install.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/build_ext.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/register.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/setopt.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/bdist_wininst.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/develop.py -> build/lib.linux-i686-2.7/setuptools/command copying setuptools/command/install_egg_info.py -> build/lib.linux-i686-2.7/setuptools/command creating build/lib.linux-i686-2.7/setuptools/_backport copying setuptools/_backport/__init__.py -> build/lib.linux-i686-2.7/setuptools/_backport creating build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_packageindex.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_build_ext.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_dist_info.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_develop.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_easy_install.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_markerlib.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/doctest.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_sandbox.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/__init__.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/py26compat.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_egg_info.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/server.py -> build/lib.linux-i686-2.7/setuptools/tests 
copying setuptools/tests/test_upload_docs.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_test.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_resources.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_bdist_egg.py -> build/lib.linux-i686-2.7/setuptools/tests copying setuptools/tests/test_sdist.py -> build/lib.linux-i686-2.7/setuptools/tests creating build/lib.linux-i686-2.7/setuptools/_backport/hashlib copying setuptools/_backport/hashlib/_sha.py -> build/lib.linux-i686-2.7/setuptools/_backport/hashlib copying setuptools/_backport/hashlib/_sha512.py -> build/lib.linux-i686-2.7/setuptools/_backport/hashlib copying setuptools/_backport/hashlib/__init__.py -> build/lib.linux-i686-2.7/setuptools/_backport/hashlib copying setuptools/_backport/hashlib/_sha256.py -> build/lib.linux-i686-2.7/setuptools/_backport/hashlib creating build/bdist.linux-i686 creating build/bdist.linux-i686/egg copying build/lib.linux-i686-2.7/easy_install.py -> build/bdist.linux-i686/egg creating build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/py27compat.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/script template.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/py24compat.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/script template (dev).py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/site-patch.py -> build/bdist.linux-i686/egg/setuptools creating build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/upload_docs.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/bdist_egg.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/sdist.py -> 
build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/upload.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/egg_info.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/install_scripts.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/rotate.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/easy_install.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/build_py.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/install_lib.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/bdist_rpm.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/saveopts.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/__init__.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/test.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/alias.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/install.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/build_ext.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/register.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/setopt.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/bdist_wininst.py -> build/bdist.linux-i686/egg/setuptools/command copying 
build/lib.linux-i686-2.7/setuptools/command/develop.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/command/install_egg_info.py -> build/bdist.linux-i686/egg/setuptools/command copying build/lib.linux-i686-2.7/setuptools/__init__.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/ssl_support.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/archive_util.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/depends.py -> build/bdist.linux-i686/egg/setuptools creating build/bdist.linux-i686/egg/setuptools/_backport creating build/bdist.linux-i686/egg/setuptools/_backport/hashlib copying build/lib.linux-i686-2.7/setuptools/_backport/hashlib/_sha.py -> build/bdist.linux-i686/egg/setuptools/_backport/hashlib copying build/lib.linux-i686-2.7/setuptools/_backport/hashlib/_sha512.py -> build/bdist.linux-i686/egg/setuptools/_backport/hashlib copying build/lib.linux-i686-2.7/setuptools/_backport/hashlib/__init__.py -> build/bdist.linux-i686/egg/setuptools/_backport/hashlib copying build/lib.linux-i686-2.7/setuptools/_backport/hashlib/_sha256.py -> build/bdist.linux-i686/egg/setuptools/_backport/hashlib copying build/lib.linux-i686-2.7/setuptools/_backport/__init__.py -> build/bdist.linux-i686/egg/setuptools/_backport copying build/lib.linux-i686-2.7/setuptools/package_index.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/sandbox.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/dist.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/extension.py -> build/bdist.linux-i686/egg/setuptools copying build/lib.linux-i686-2.7/setuptools/compat.py -> build/bdist.linux-i686/egg/setuptools creating build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_packageindex.py -> 
build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_build_ext.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_dist_info.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_develop.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_easy_install.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_markerlib.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/doctest.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_sandbox.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/__init__.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/py26compat.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_egg_info.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/server.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_upload_docs.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_test.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_resources.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_bdist_egg.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/setuptools/tests/test_sdist.py -> build/bdist.linux-i686/egg/setuptools/tests copying build/lib.linux-i686-2.7/pkg_resources.py -> build/bdist.linux-i686/egg creating build/bdist.linux-i686/egg/_markerlib copying 
build/lib.linux-i686-2.7/_markerlib/markers.py -> build/bdist.linux-i686/egg/_markerlib copying build/lib.linux-i686-2.7/_markerlib/__init__.py -> build/bdist.linux-i686/egg/_markerlib byte-compiling build/bdist.linux-i686/egg/easy_install.py to easy_install.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/py27compat.py to py27compat.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/script template.py to script template.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/py24compat.py to py24compat.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/script template (dev).py to script template (dev).pyc byte-compiling build/bdist.linux-i686/egg/setuptools/site-patch.py to site-patch.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/upload_docs.py to upload_docs.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/bdist_egg.py to bdist_egg.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/sdist.py to sdist.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/upload.py to upload.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/egg_info.py to egg_info.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/install_scripts.py to install_scripts.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/rotate.py to rotate.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/easy_install.py to easy_install.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/build_py.py to build_py.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/install_lib.py to install_lib.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/bdist_rpm.py to bdist_rpm.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/saveopts.py to saveopts.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/__init__.py to __init__.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/test.py to test.pyc byte-compiling 
build/bdist.linux-i686/egg/setuptools/command/alias.py to alias.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/install.py to install.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/build_ext.py to build_ext.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/register.py to register.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/setopt.py to setopt.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/bdist_wininst.py to bdist_wininst.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/develop.py to develop.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/command/install_egg_info.py to install_egg_info.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/__init__.py to __init__.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/ssl_support.py to ssl_support.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/archive_util.py to archive_util.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/depends.py to depends.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/_backport/hashlib/_sha.py to _sha.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/_backport/hashlib/_sha512.py to _sha512.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/_backport/hashlib/__init__.py to __init__.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/_backport/hashlib/_sha256.py to _sha256.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/_backport/__init__.py to __init__.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/package_index.py to package_index.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/sandbox.py to sandbox.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/dist.py to dist.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/extension.py to extension.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/compat.py to compat.pyc byte-compiling 
build/bdist.linux-i686/egg/setuptools/tests/test_packageindex.py to test_packageindex.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_build_ext.py to test_build_ext.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_dist_info.py to test_dist_info.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_develop.py to test_develop.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_easy_install.py to test_easy_install.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_markerlib.py to test_markerlib.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/doctest.py to doctest.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_sandbox.py to test_sandbox.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/__init__.py to __init__.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/py26compat.py to py26compat.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_egg_info.py to test_egg_info.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/server.py to server.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_upload_docs.py to test_upload_docs.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_test.py to test_test.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_resources.py to test_resources.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_bdist_egg.py to test_bdist_egg.pyc byte-compiling build/bdist.linux-i686/egg/setuptools/tests/test_sdist.py to test_sdist.pyc byte-compiling build/bdist.linux-i686/egg/pkg_resources.py to pkg_resources.pyc byte-compiling build/bdist.linux-i686/egg/_markerlib/markers.py to markers.pyc byte-compiling build/bdist.linux-i686/egg/_markerlib/__init__.py to __init__.pyc creating build/bdist.linux-i686/egg/EGG-INFO copying setuptools.egg-info/PKG-INFO -> build/bdist.linux-i686/egg/EGG-INFO copying 
setuptools.egg-info/SOURCES.txt -> build/bdist.linux-i686/egg/EGG-INFO copying setuptools.egg-info/dependency_links.txt -> build/bdist.linux-i686/egg/EGG-INFO copying setuptools.egg-info/entry_points.txt -> build/bdist.linux-i686/egg/EGG-INFO copying setuptools.egg-info/entry_points.txt.orig -> build/bdist.linux-i686/egg/EGG-INFO copying setuptools.egg-info/requires.txt -> build/bdist.linux-i686/egg/EGG-INFO copying setuptools.egg-info/top_level.txt -> build/bdist.linux-i686/egg/EGG-INFO copying setuptools.egg-info/zip-safe -> build/bdist.linux-i686/egg/EGG-INFO creating dist creating 'dist/setuptools-0.9.8-py2.7.egg' and adding 'build/bdist.linux-i686/egg' to it removing 'build/bdist.linux-i686/egg' (and everything under it) Processing setuptools-0.9.8-py2.7.egg Copying setuptools-0.9.8-py2.7.egg to /usr/local/lib/python2.7/dist-packages Adding setuptools 0.9.8 to easy-install.pth file Installing easy_install script to /usr/local/bin Installing easy_install-2.7 script to /usr/local/bin Installed /usr/local/lib/python2.7/dist-packages/setuptools-0.9.8-py2.7.egg Processing dependencies for setuptools==0.9.8 Finished processing dependencies for setuptools==0.9.8

Now you can use the easy_install command line

Source : https://pypi.python.org/pypi/setuptools/0.9.8#installation-instructions

adapt iframe size to window with only css

css :

div#container {
    position: fixed;
    top: 0px;
    left: 0px;
    bottom: 0px;
    right: 0px;
}
div#container iframe {
    position: absolute;
    top: 0;
    bottom: 0;
    left: 0;
    right: 0;
    height: 100%;
    width: 100%;
}

html :

<div id="container">
    <iframe src="http://your_url"></iframe>
</div>

share folder from ubuntu to windows

We describe here how to share a folder from ubuntu with Samba:

  • host : ubuntu

  • host ip : 10.10.3.52

  • Edit “smb.conf”

    sudo vi /etc/samba/smb.conf

  • Add following :

    [myshared]
    path = /home/witr/myFolder
    available = yes
    valid users = witr
    read only = no
    guest ok = no
    browsable = yes
    public = yes
    writable = yes

  • Restart samba:

    sudo service smbd restart

  • In windows, mount new drive :
    a. choose drive letter,
    b. select network->ubuntu->myshared or type \\10.10.3.52\myshared
    c. when request user and password, use one of valid users (here witr)

pastebin.fr

If your synergy setup stops working and you need to send a non-secret note from one PC to another.

If you have to send urgent, non-secret data to a friend connected to the internet on the other side of the world.

Think of this simple tool :
http://pastebin.fr

svn first user commit

print the user who made the first commit for each class in the project :

> find -name "*.java" -exec svn log -l1 -r1:HEAD {} \; | grep "|" | awk '{print $3}'

print the java class name and the user who made the first commit on that class
(user /path_to_java_file/java_file.java) :

> find -name "*.java" | while read line; do user=$(svn log -l1 -r1:HEAD $line | grep "|" | awk '{print $3}'); echo $user $line; done;
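To see what the `grep "|" | awk '{print $3}'` part extracts, here is the same pipeline run on a canned `svn log -l1` output (the revision line and the user `witr` are made up for illustration, so no repository is needed):

```shell
# a fake one-entry `svn log` output: only the header line contains "|"
log='------------------------------------------------------------------------
r1 | witr | 2013-08-01 10:00:00 +0200 (Thu, 01 Aug 2013) | 1 line

initial import
------------------------------------------------------------------------'
# on "r1 | witr | ..." the third whitespace-separated field is the committer
printf '%s\n' "$log" | grep "|" | awk '{print $3}'
```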

maven javadoc for only specified classes

Generate javadoc site :

> mvn clean javadoc:javadoc

We can specify the classes we want included in the javadoc; for that, add the following in the build>plugins section:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <version>2.9</version>
        <configuration>
                <sourceFileIncludes>
                        <include>DelegationUtils.java</include>
                </sourceFileIncludes>
                <sourcepath>${basedir}/src/path_to_package1;${basedir}/src/path_to_package2;</sourcepath>
        </configuration>
      </plugin>

Note that the previous configuration is handled by version 2.9 (or prior) of the maven javadoc plugin.
Then regenerate the javadoc :

> mvn clean javadoc:javadoc

sonarQube and sonarRunner + mysql

Analyse sonarQube standalone server :

[MYSQL]

First of all prepare your mysql user sonar :

> mysql -uroot -p
mysql> use mysql;
mysql> create user sonar identified by 'sonar';
mysql> update user set host='%' where user = 'sonar';
mysql> create database sonar;
mysql> grant all privileges on sonar.* to sonar;
mysql> flush privileges;
mysql> quit

[SONAR QUBE]

  • download sonar-x.x.x.zip

  • unzip sonar-x.x.x.zip :

> unzip sonar-x.x.x.zip -d /path/where/to/extract/

  • edit /extracted/sonar/path/conf/sonar.properties

       1. comment     : sonar.jdbc.url:                            jdbc:h2:tcp://localhost:9092/sonar

       2. uncomment : sonar.jdbc.url:                            jdbc:mysql://127.0.0.1:3306/sonar[…]

[SONAR RUNNER]

  • download sonar-runner-dist-x.x.x.zip

  • unzip sonar-runner-dist-x.x.x.zip :

> unzip sonar-runner-dist-x.x.x.zip -d /path/where/to/extract/
  • add environment variable SONAR_RUNNER_HOME: (ubuntu) edit /etc/environment and add SONAR_RUNNER_HOME=/sonar-runner/home/path/ 

  • add $SONAR_RUNNER_HOME/bin to PATH : (ubuntu) edit /etc/environment and edit PATH

  • go to your project home directory

  • create file sonar-project.properties

  • copy the following into sonar-project.properties

# required metadata
sonar.projectKey=witr:project
sonar.projectName=proj
sonar.projectVersion=1.0
# optional description
sonar.projectDescription=Fake description
# path to source directories (required)
sonar.sources=src/main
# The value of the property must be the key of the language.
sonar.language=java
# Encoding of the source code
sonar.sourceEncoding=UTF-8
# Additional parameters
#sonar.my.property=value
  • finally analyse your project :
> sonar-runner

javadoc statistics

Generate javadoc with maven

> mvn clean javadoc:javadoc

Check which methods have not been commented (missing javadoc comments)

> mvn clean checkstyle:checkstyle

maven next version

if you are curious and want to know how maven produces the release version and the next version, you can debug its behavior with the following code:

import org.apache.maven.shared.release.versions.DefaultVersionInfo;
import org.apache.maven.shared.release.versions.VersionParseException;

public class VersionDebug {
    public static void main(String[] args) {
        try {
            String version = "1.2.3-SNAPSHOT";
            DefaultVersionInfo v = new DefaultVersionInfo(version);
            System.out.println("version         : " + version);
            System.out.println("release version : " + v.getReleaseVersionString());
            System.out.println("next version    : " + v.getNextVersion());
        } catch (VersionParseException e) {
            e.printStackTrace();
        }
    }
}

don’t forget to add maven-release-manager to the classpath, or to your project pom as a dependency

main class in manifest with maven

To declare the main class in the manifest file of your project with maven, add the following to the pom:

<build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
       <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <configuration>
             <archive>
                <manifest>
                   <addClasspath>true</addClasspath>
                   <mainClass>net.witr.Application</mainClass>
                </manifest>
             </archive>
          </configuration>
       </plugin>
    </plugins>
</build>

include all files with maven-jar-plugin

Context : a pom which uses maven-jar-plugin to package a jar.
The source tree contained some non-java files (gif, html, …) which weren’t included in the produced jar when running maven install.

Only compiled java classes are included, since maven-jar-plugin filters resources by default; to disable the filtering we must add the following:

<build>
    <sourceDirectory>src</sourceDirectory>
    <resources>
       <resource>
          <directory>src</directory>
          <filtering>false</filtering>
       </resource>
    </resources>
    <plugins>
       <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
       </plugin>
    </plugins>
</build>

angularJS by Google

Another way of writing javascript. AngularJS is not only a javascript library to integrate or a javascript framework to use, but also an expression language to discover here: http://angularjs.org/

JSFIDDLE

Useful for javascript developers. http://jsfiddle.net/ is a SaaS application that lets you edit a javascript application and see the result instantly

  • choose your js framework: Prototype, mootools, jquery, etc.
  • link external scripts
    and more features to discover

copying several files whose names contain accented characters causes a name-encoding problem on the destination host

Today we transferred pdf files from a windows machine to a linux host with the SSH Secure Shell tool.
Files whose names contain accented characters end up with question marks in place of the accented characters. This is a problem for the program that has to parse them under their original names (with accented characters).

Creating a tar and then extracting it does not solve the problem.

Solution 1:
create a samba share with a machine on the network and transfer with scp

Solution 2 (naive):
create two scripts that encode and decode the file names.
a. run the encoding script on the source host before the transfer
b. transfer
c. run the decoding script on the destination host

fichier_encodage.sh:
mv "Nöel Milad.pdf" 1.pdf
mv "Knüer Dina.pdf" 2.pdf
… etc

fichier_decodage.sh:
mv 1.pdf "Nöel Milad.pdf"
mv 2.pdf "Knüer Dina.pdf"
… etc

if the list is very long, use Excel:
column A: mv
column B: empty
column C: paste the list of file names (output of the dir command)
column D: empty
column E: numbering (type 1 and use Excel's incremental auto-fill)
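Solution 2 can also be scripted instead of generating the mv lines by hand. A minimal sketch, assuming the files are pdfs in the current directory (the mapping file name mapping.txt is my own choice, not from the original note):

```shell
# encode_names: rename every *.pdf in the current directory to 1.pdf, 2.pdf, ...
# and record "newname oldname" pairs in mapping.txt so the names can be restored.
# Beware of collisions if a file named like "1.pdf" already exists.
encode_names() {
  i=1
  : > mapping.txt
  for f in *.pdf; do
    [ -e "$f" ] || continue            # no match: skip the literal pattern
    printf '%s %s\n' "$i.pdf" "$f" >> mapping.txt
    mv -- "$f" "$i.pdf"
    i=$((i+1))
  done
}

# decode_names: read mapping.txt back and restore the original names.
decode_names() {
  while read -r new old; do
    mv -- "$new" "$old"
  done < mapping.txt
}
```

Run encode_names on the source host, transfer the pdfs together with mapping.txt, then run decode_names on the destination host.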

secure jboss7 with ssl (https)

  1. create keystore certificate for jboss user

    keytool -genkey -alias jboss -keyalg RSA

  2. edit jboss configuration file standalone.xml:
    • in interfaces : add new interface “local”


  • in socket binding : add new socket binding “httpLocal” listening on localhost with http


  • remove the http connector
  • add a local http connector (for modules that have to communicate over http with each other)
  • add a new https connector
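The XML snippets for the interface and socket-binding steps were lost from this note; a hedged sketch of what they typically look like in a jboss7 standalone.xml, using the "local" and "httpLocal" names from the steps above:

```xml
<!-- in <interfaces> : an interface bound to the loopback only -->
<interface name="local">
    <inet-address value="127.0.0.1"/>
</interface>

<!-- in <socket-binding-group> : an http binding restricted to that interface -->
<socket-binding name="httpLocal" interface="local" port="8080"/>
```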


in subsystem web we must finally have :

        <subsystem default-virtual-server="default-host" xmlns="urn:jboss:domain:web:1.1" native="false">
            <connector socket-binding="httpLocal" scheme="http" protocol="HTTP/1.1" name="http"></connector>
            <connector socket-binding="https" scheme="https" protocol="HTTP/1.1" name="https" secure="true">
              <ssl password="pwd typed in step 1"></ssl>
            </connector>
            <virtual-server enable-welcome-root="true" name="default-host">
                <alias name="localhost"></alias>
                <alias name="example.com"></alias>
            </virtual-server>
        </subsystem>

exception : "java -jar" java.lang.NoClassDefFoundError

when you execute a jar file with java, you have to know that the jar must declare the main class and the classpath in its manifest file.

extract MANIFEST.MF

 jar xvf myjar.jar META-INF/MANIFEST.MF
see its content
vi META-INF/MANIFEST.MF
it looks like :

Manifest-Version: 1.0
Ant-Version: Apache Ant 1.5.3
Created-By: 1.4.2_02-b03 (Sun Microsystems Inc.)
Main-Class: xxx.package.MainClassName
Class-Path: ./lib/a.jar  ./lib/b.jar

if you get a NoClassDefFoundError when launching the jar, look for the missing jar library on the web, download it and put it in the lib directory for example, and finally declare it in the manifest like the others (./lib/a.jar ./lib/b.jar ./lib/c.jar). So:

mkdir unzippedjar
unzip myjar -d unzippedjar/
cd unzippedjar
vi META-INF/MANIFEST.MF
edit the manifest file : add the missing lib to the classpath, and save
zip -r ../myjar.jar *

re-run your myjar.jar and it should work

Example: the case of delineate, which is missing xalan-2.7.1.jar as a library.

jpg to svg

to convert jpg to svg we can use autotrace or potrace.

autotrace:

sudo apt-get install autotrace
use (convert /tmp/mba.jpg to svg):
autotrace /tmp/mba.jpg -input-format jpg -output-format svg -output-file /tmp/mba.svg

see the delineate gui application, which uses autotrace or potrace

merge several pdf files

June 17, 2013

under ubuntu install pdftk, then to merge several pdf files type the following command:

pdftk file1.pdf file2.pdf output file_merge.pdf

simple nodejs http server

June 14, 2013

  • install nodejs
  • create file server.js with following content:
    var url = require("url");

    function handleHttpRequest(request, response){
        response.writeHead(200, {"Content-Type": "text/html"});
        response.write("pathname:" + url.parse(request.url).pathname);
        response.write("<br/>");
        response.write("query:" + url.parse(request.url).query);
        response.end();
    }

    var http = require("http");
    http.createServer(handleHttpRequest).listen(8888);

  • in command line: node /path/to/server.js
  • go to http://localhost:8888
  • go to http://localhost:8888/service?user=nour&age=42

have your GIT server

May 31, 2013

GIT consists of two repositories: one on the server side and one on the client side (a clone of the server-side repo).
When the client commits, modifications are saved to the client-side repository.
To send modifications to the server, the client must push its commits.

  1. install and configure git server
  2. create server git repository
  3. access to remote git repo with eclipse
  4. explore git repository with gitk

  1. install and configure git server:
    sudo apt-get install git-core
    git config --global color.diff auto
    git config --global color.status auto
    git config --global color.branch auto
    git config --global user.name "mabrouk"
    ==> see your config in vim ~/.gitconfig

  2. create server git repository:
    cd /home/mba
    mkdir mbaGitRepo
    cd mbaGitRepo
    git init --bare

  3. access the remote git repo with eclipse:
    Window->Show View->Git Repositories
    click on the icon "Clone a Git Repository and add the clone to this view"
    • Host : your host
    • Repository path : ~/mbaGitRepo
    • Connection : Protocol ssh / Port 22
    • user : linux username (mba)
    • password : linux user password
      click Next
      don't worry about the warning: Source Git repository is empty
      click Next
      Directory : choose where you want to clone the remote git repository on your local machine
      click Finish
  4. explore the git repository with gitk:
    install gitk : sudo apt-get install gitk
    launch gitk  : gitk

exception: Can't connect to X11 window server using ':0' as the value of the DISPLAY variable

May 30, 2013

If you get the following error: Can't connect to X11 window server using ':0' as the value of the DISPLAY variable,
you must know that either:

  • the linux user (launching the process requesting the Xserver) is not authorized to access the Xserver
  • OR the DISPLAY variable is misconfigured

type xhost to find out:

xhost
No protocol specified
No protocol specified
xhost:  unable to open display "???"
==> means you don't have authorization to any Xserver
==> to authorize access type: > sudo xhost +SI:localuser:youruser

xhost
xhost:  unable to open display "???"
==> means the DISPLAY variable is misconfigured
==> you must fix the DISPLAY variable. type: > export DISPLAY=hostname:D.S

  • hostname is the name of the computer where the X server runs. An omitted hostname means the localhost.
  • D is a sequence number (usually 0). It can be varied if there are multiple displays connected to one computer.
  • S is the screen number. A display can actually have multiple screens. Usually there’s only one screen though where 0 is the default.

look for class name in several jar files

May 28, 2013

ll path-to-jars/ | awk {'print "path-to-jars/"$9'} | grep ".jar" | while read thejar; do echo $thejar; jar -tvf $thejar | grep "yourClassName pattern"; done
the previous command line will echo each jar file name followed by the classes whose names match your pattern

list content of several jar files

May 28, 2013

ll path-to-jars/ | awk {'print "path-to-jars/"$9'} | grep ".jar" | xargs -n 1 jar -tvf

allow remote debug for jboss7

May 27, 2013
(https://community.jboss.org/thread/175385)

  1. Check the standalone.conf (or standalone.conf.bat for Windows OS) and uncomment the following line:
    #JAVA_OPTS=”$JAVA_OPTS -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n”

setup hsqldb with jboss7

jboss7 comes with h2 already installed and set up. For hsqldb, just take the h2 setup as an example.

The following steps set up hsqldb with jboss7.

A. create new jboss7 module: hsqldb module

  1. first create directory $JBOSS_HOME/modules/org/hsqldb/main

    cd $JBOSS_HOME
    mkdir -p modules/org/hsqldb/main

  2. download hsqldb and put the hsqldb.jar into created directory main
  3. create in the main directory the following module.xml file (assuming the hsqldb jar is hsqldb-x.x.x.jar)
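The module.xml referenced here was lost from this note; a hedged sketch of what a jboss7 module descriptor for hsqldb typically looks like (the dependency list is an assumption, mirroring the bundled h2 module):

```xml
<module xmlns="urn:jboss:module:1.1" name="org.hsqldb">
    <resources>
        <resource-root path="hsqldb-x.x.x.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```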

B. setup hsqldb datasource

  1. add hsql driver by editing $JBOSS_HOME/standalone/configuration/standalone.xml (locate drivers in standalone.xml file)

    ...
  2. finally add the hsqldb datasource in $JBOSS_HOME/standalone/configuration/standalone.xml (locate datasources in the file); we consider a file hsqldb database created in /home/hsqldb/data/witrHsqldb

    jdbc:hsqldb:file:/home/hsqldb/data/witrHsqldb hsqldb false false FailingConnectionOnly sa
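The driver and datasource XML was lost from this note, leaving only the bare values above; a hedged sketch of what the two standalone.xml fragments typically look like (the jndi-name, pool-name, and driver-class are my assumptions, the connection url and sa user come from the values above):

```xml
<!-- inside <drivers> -->
<driver name="hsqldb" module="org.hsqldb">
    <driver-class>org.hsqldb.jdbc.JDBCDriver</driver-class>
</driver>

<!-- inside <datasources> -->
<datasource jndi-name="java:jboss/datasources/witrHsqldbDS" pool-name="witrHsqldbDS" enabled="true">
    <connection-url>jdbc:hsqldb:file:/home/hsqldb/data/witrHsqldb</connection-url>
    <driver>hsqldb</driver>
    <security>
        <user-name>sa</user-name>
        <password></password>
    </security>
</datasource>
```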

tar exclude

May 16, 2013

to exclude a folder from the created tar:
say the folder /home/fol
contains
/home/fol/data1/
/home/fol/data2/
/home/fol/tmp1/
/home/fol/tmp2/

we want to tar "fol" while excluding the folders "tmp1" and "tmp2":
cd /home
tar -czvf fol.tgz --exclude 'fol/tmp1' --exclude 'fol/tmp2' fol

and that's it
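A quick way to check the excludes, sketched on a throwaway copy of the tree (file names are made up for the test):

```shell
# Build the sample tree, create the archive with the excludes,
# and list its contents to verify tmp1/tmp2 stayed out.
dir=$(mktemp -d)
cd "$dir"
mkdir -p fol/data1 fol/data2 fol/tmp1 fol/tmp2
touch fol/data1/a fol/tmp1/b
tar -czf fol.tgz --exclude 'fol/tmp1' --exclude 'fol/tmp2' fol
tar -tzf fol.tgz
```

The listing should contain fol/data1/ and fol/data2/ but nothing under tmp1 or tmp2.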

svn : get remote changes too

May 6, 2013

by default you get local changes (the local copy is compared to the revision from which it was checked out)

to get remote changes too: svn update -rHEAD

oracle: exception : Ora-28001: the password has expired

May 6, 2013

this error tells us that the user's password has expired. Check it with the following query:
SELECT USERNAME, PROFILE, ACCOUNT_STATUS, EXPIRY_DATE FROM dba_users WHERE username='your_oracle_username';
look at the value of the ACCOUNT_STATUS column (EXPIRED or EXPIRED & LOCKED)

password policies are defined in the oracle PROFILE associated with the user (value of the PROFILE column). The profile properties can be listed with:
select * from dba_profiles where profile = 'profile_name';

look at the line resource_name='PASSWORD_LIFE_TIME'; depending on your security policies you can change the value to UNLIMITED to avoid password expiry
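For reference, the statements involved are sketched below (the profile name, user name, and password are placeholders, not values from the original note):

```sql
-- disable password expiry on the user's profile
ALTER PROFILE your_profile LIMIT PASSWORD_LIFE_TIME UNLIMITED;

-- reset the password and unlock the account if it is EXPIRED & LOCKED
ALTER USER your_oracle_username IDENTIFIED BY new_password ACCOUNT UNLOCK;
```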

jrebel/eclipse/remote jbossAS server step by step

April 19, 2013

install jrebel with eclipse:

  • get a licence from the jrebel web site
  • install the jrebel eclipse plugin. Restart eclipse and activate your licence
  • in the jrebel config center : select a project in the projects panel, and enter in the corresponding “deployment URL” field your jboss7 server url (example http://myserver:8080)
       you will get “Server responded with an error: null”; don’t worry, this is because the server is not yet configured to respond to remote requests
  • now compile and package your project and then deploy it to your jboss server.

install jrebel under jboss7

  • copy "C:\Users\YOURHOME\.jrebel\jrebel.lic" and "C:\Users\YOURHOME\.jrebel\jrebel.properties" to the home of the server user launching jboss : /home/user_launching_jboss/.jrebel/
  • get jrebel.jar from the jrebel web site and put it on your host machine
  • add the following to the script launching jboss7, before the call to standalone.sh :
    export JAVA_OPTS="$JAVA_OPTS -javaagent:/path/to/jrebel.jar -Drebel.log.file=$JBOSS_HOME/standalone/log/jrebel.log -Drebel.remoting_plugin=true"
  • start the jboss7 server

    if you get the following error : Server responded with an error: null ==> this is because remoting is not enabled in the jrebel java agent : add -Drebel.remoting_plugin=true to JAVA_OPTS
    if you get the following error : Server responded with an error: Remoting module 'navco-module-delegal-postes-pm' was not found ==> redeploy your project in jboss7 (jrebel remoting needs the rebel-remote.xml)

be notified of all svn changes

If you want, like me, to be notified of all svn changes, here is a script I wrote for you.

Prerequisite :

  • svn installed

  • svn well configured for user executing script

#!/bin/bash
export SVN_UPDATES_HOME=/mnt/scripts/svnUpdates
export SVN_UPDATES_LAST=$SVN_UPDATES_HOME/tmp/svnUpdatesLast.txt
export SVN_UPDATES_TMP=$SVN_UPDATES_HOME/tmp/svnUpdates.txt
export SVN_UPDATES_DIFF_TMP=$SVN_UPDATES_HOME/tmp/svnUpdatesDiff.txt
export SVN_UPDATES_DIFF_LOG=$SVN_UPDATES_HOME/log/svnUpdatesLog

if [[ -s $SVN_UPDATES_LAST ]]
then
  echo "copy last svn log to tmp"
  cp $SVN_UPDATES_LAST $SVN_UPDATES_TMP
else
  touch $SVN_UPDATES_TMP
fi

for log in 1 2 3
do
  echo "svn log - try:$log"
  svn log -v -l5 -rHEAD:1 https://svn.witr.net > $SVN_UPDATES_LAST
  if [[ -s $SVN_UPDATES_LAST ]]
  then
    echo "svn logged successfully in try:$log"
    break
  fi
done

if [[ -s $SVN_UPDATES_LAST ]]
then
  echo "proceed to diff last with tmp"
  diff $SVN_UPDATES_LAST $SVN_UPDATES_TMP | grep -E "^<" | sed s/^.// > $SVN_UPDATES_DIFF_TMP
  diffsCount=$(wc -l $SVN_UPDATES_DIFF_TMP | awk {'print $1'})
  if [ $diffsCount -gt 0 ]
  then
      echo "diffs found: send notification"
      datetime=$(date "+%Y%m%d-%H%M%S")
      logFile=$SVN_UPDATES_DIFF_LOG$datetime
      cp $SVN_UPDATES_DIFF_TMP $logFile
      ### put here the way you want to be notified (mail, jabber, ...)
      ### echo "==== svn updated cf. $logFile ====" | sendxmpp mabrouk@openfire.witr.net
      ### cat $logFile | sendxmpp mabrouk@openfire.witr.net
  fi
fi

Here I choose to be notified by message sent from jabber server from my account to my account.
Prerequisite : sendxmpp installed : apt-get install sendxmpp

echo "==== svn updated cf. $logFile ====" | sendxmpp mabrouk@openfire.witr.net
cat $logFile | sendxmpp mabrouk@openfire.witr.net

Finally, schedule the script in your crontab (running it every minute is recommended)

* * * * * /mnt/scripts/svnUpdates/svnUpdates.sh

mount cifs windows drive under ubuntu

April 18, 2013

mount a windows cifs drive:

mount -t cifs -o rw,username=mba,password=xxxxxx //drive/users /drive/U
I got this error message:
mount: wrong fs type, bad option, bad superblock on //drive/users,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so

I looked in dmesg

dmesg | tail
and got:
[75756.500633] CIFS VFS: cifs_mount failed w/return code = -22

installing cifs-utils fixes the problem

sudo apt-get install cifs-utils

apache2+php

April 12, 2013

apache2+php

sudo apt-get install apache2
sudo apt-get install php5
sudo apt-get install libapache2-mod-php5
sudo /etc/init.d/apache2 restart
vi /etc/apache2/sites-available/default : change /var/www to your preferred folder

move /usr to have more disk space

April 11, 2013

say your vm is "ubuntu"

  • sudo gparted
  • two possibilities:
      1- create an ext4 partition if there is unpartitioned space on the disk
      2- unmount the partition to be resized + resize it + create an ext4 partition on the newly unpartitioned space
  • sudo mkdir /media/myusr
  • sudo mount /dev/sdax /media/myusr
  • sudo blkid
    …………
    /dev/sdax: UUID="xxxx-xxx-xxx"
    …………..
  • add this line to /etc/fstab :
    UUID=xxxx-xxx-xxx /media/myusr ext4 errors=remount-ro  0     1
  • sudo cp -a /usr/. /media/myusr/

shut down ubuntu.
create a new vm "tmp" with a new virtual disk + attach the ubuntu vm's disk to it
and start the new tmp vm

  • sudo blkid
    …………
    /dev/sdax: UUID="yyyy-yyy-yyy"
    …………..
    /dev/sdbx: UUID="xxxx-xxx-xxx"
    …………..
  • sudo mkdir /media/sdbroot
    where sdbx is the ubuntu boot partition which contains /usr
  • sudo mount /dev/sdbx /media/sdbroot
  • sudo rm -rf /media/sdbroot/usr/*
  • vi /media/sdbroot/etc/fstab
  • change this line in /etc/fstab :
    UUID=xxxx-xxx-xxx /media/myusr ext4 errors=remount-ro  0     1
    replacing /media/myusr with /usr:
    UUID=xxxx-xxx-xxx /usr ext4 errors=remount-ro  0     1
  • sudo umount /dev/sdbx
  • shut down the tmp vm + delete it if you want (it is no longer needed)
  • start the ubuntu vm: /usr is now on the new partition!

sendxmpp : send jabber message with command line

April 8, 2013

sendxmpp : send jabber messages from the command line:
install sendxmpp (https://www.ebower.com/docs/ubuntu-scripted-gtalk/) : > sudo apt-get install sendxmpp
create the user config file : > touch ~/.sendxmpprc
secure your configuration file : > chmod 600 ~/.sendxmpprc
configure your jabber account : > echo 'myusername@gmail.com;talk.google.com mypassword' > ~/.sendxmpprc
send your first message : > echo "salut" | sendxmpp yourFriend@server

have your own cloud storage with : ownCloud

I describe here how to install and configure your own cloud with owncloud.org

assuming we have apache and mysql installed

> wget http://download.owncloud.org/community/owncloud-5.0.13.tar.bz2
> tar -xjf owncloud-5.0.13.tar.bz2 -C [/path/to/apache]/www/
> cd /path/to/apache/www/owncloud

and suppose that apache user is www-data and its group is www-data

> chown -R www-data:www-data [/path/to/your/owncloud]/config
> chown -R www-data:www-data [/path/to/your/owncloud]/apps
> mkdir [/path/to/your/owncloud]/data
> chown -R www-data:www-data [/path/to/your/owncloud]/data

edit /etc/apache2/sites-enabled/000-default and set AllowOverride to All in the Directory /path/to/apache/www/

> a2enmod rewrite
> a2enmod headers
> service apache2 restart

open your browser at localhost/owncloud and see if some php modules are missing

  • PHP-GD module is required
> sudo apt-get install php5-gd
  • xml parser is required
> apt-get install php-xml-parser
  • no database driver is installed (PHP modules have been installed, but they are still listed as missing)
> apt-get install php5-mysql (or any other database driver)

restart apache

> service apache2 restart

and go to http://localhost/owncloud : this step requires database connection parameters
create an empty mysql database for owncloud

> mysql -uroot -p
mysql> use mysql;
mysql> create user owncloud identified by 'owncloud';
mysql> update user set host='%' where user = 'owncloud';
mysql> create database owncloud;
mysql> grant all privileges on owncloud.* to owncloud;
mysql> flush privileges;
mysql> quit

provide database username, password, and dbname to owncloud assistant in http://localhost/owncloud. And let owncloud initializes its database.

now, you have your own cloud !

x11 window server and users rights

March 27, 2013

if you get an exception similar to the following: Can’t connect to X11 window server

check your DISPLAY env variable : > echo $DISPLAY
fix it if not correct : > export DISPLAY=:0

if it’s correctly set but you still have the problem, maybe the linux user you are using doesn’t have rights to the xhost.
check this by typing: > xhost

if your user is not in the list, give it the right to connect to the X11 window server by typing: > xhost +SI:localuser:jboss

icefaces and SendUpdates optimization

if you encounter slow rendering (on an icefaces tree for example), and generally when you code with the icefaces framework, make sure that any html component you write has an id or has an icefaces component as parent.
In fact, the icefaces client module sends an ajax request to the icefaces server module in order to retrieve DOM changes. The icefaces server module looks for dom changes, and when an element has changed but has no id, it looks for its parent in the dom and inserts all children of this parent as changed.

In my case I had a page with two elements: a <div>…</div> followed by an icefaces tree.
Unfortunately, I didn’t put an id on my <div>. So when I expanded one node in the tree, I got all tree nodes sent as updated elements in the icefaces ajax response. Setting an id (or converting the <div> to an icefaces component) fixes the problem.

@see

  • method computing and sending updates: com.icesoft.faces.context.PushModeSerializer -> serialize(final Document document)
  • to debug dom updates, add following in your web.xml:

      <context-param>
          <param-name>com.icesoft.faces.debugDOMUpdate</param-name>
          <param-value>true</param-value>
      </context-param>
    

graphml of your java project

March 22, 2013

utils:

  • degraph (https://github.com/schauder/degraph) / download: http://schauder.github.com/degraph//download/degraph-0.0.3.zip
  • yEd (http://www.yworks.com/en/downloads.html#yEd) / download: http://www.yworks.com/en/products_download.php?file=yEd-3.10.2.zip

howto:

  1. download degraph-0.0.3.zip and yEd-3.10.2.zip
  2. unzip them in corresponding directories: degraph-0.0.3, yEd-3.10.2
  3. ./degraph-0.0.3/bin/degraph -c path/to/my_jar.jar -o /tmp/output.graphml -i witr.**
        this command line will scan only witr.** classes of your my_jar.jar and create output.graphml file

  4. cd yEd-3.10.2
        > java -jar yEd-3.10.2/yed.jar
        this will run yEd. make sure you can connect to the X11 window server

  5. use yEd gui to open output.graphml file
  6. if the graph is not well presented then menu: layout>organic>
        preferred edge length = 117
        minimal node distance = 50
        avoid node/edge overlaps : check
        compactness = 0.9
        ===> validate by pressing ok

enjoy

youtrack and intellijidea

March 21, 2013

http://www.jetbrains.com/youtrack/documentation/linux_installation.html

simple as you spell a

highlight results with javascript and escaping contents of tag

March 21, 2013

substitute the regexp with this one: var search_regexp = new RegExp('(?!]?>)>([^<])?('+search.trim()+')([^>])?(?![^<]?)<','ig');

function highlightWordSearching(componentId, search){
    if(search && !search.blank()){
        search = search.replace('&','&amp;');
        var search_regexp = new RegExp('>([^<])?('+search.trim()+')([^>])?<','ig');
        $(componentId).innerHTML = $(componentId).innerHTML.replace(search_regexp,'>$1$2$3<');
    }
}

connect with ssh from windows SSH Secure Shell to ubuntu sshd

March 20, 2013

first of all install sshd : “sudo apt-get install openssh-server”
then start sshd : “sudo /etc/init.d/ssh start” and try to connect from SSH Secure Shell with username/password

If it works, we have to know that username/password authentication is not secure!
So we will disable authentication with username/password and allow only keys authentication.
We have to edit the sshd config file: sudo vi /etc/ssh/sshd_config
ensure that the lines below are not commented and have these values:

  • RSAAuthentication yes
  • PubkeyAuthentication yes
  • AuthorizedKeysFile      %h/.ssh/authorized_keys
    and disable username/password auth:
  • PasswordAuthentication no
    save sshd_config and restart sshd : sudo /etc/init.d/ssh restart

now if you try to connect from SSH Secure Shell with username/password you will be refused. You must generate and install public and private keys.

ssh-keygen -t dsa -f mykey.ossh
will create private and public keys

cat mykey.ossh.pub >> ~/.ssh/authorized_keys
will authorize person having keys to login

if you connect with an openssh client, just copy the keys into your ~/.ssh directory
if you connect from SSH Secure Shell, you must first convert the openssh keys to ssh2 format

ssh-keygen -e -f mykey.ossh > mykey
ssh-keygen -e -f mykey.ossh.pub > mykey.pub
then use mykey and mykey.pub from SSH SecureShell

connect with specified private key

ssh -i path/to/your/private_key user@server.domain_or_ip
transfer myfile to server
scp -i path/to/your/private_key myfile user@server.domain_or_ip:/path_in_server/

wireshark and npf under windows7

March 19, 2013

wireshark can not start capture because npf is not running under windows7.
solution : execute cmd.exe as administrator and type:

sc qc npf
this will display the state of npf. if it’s not running then type
sc start npf
if you want to set auto start type
sc config npf start=auto

xmllint

March 5, 2013

ubuntu command line to beautify (pretty-print) an xml file: xmllint --format file.xml

ubuntu numpad HS

February 12, 2013

ubuntu numpad not working
it happened to me several times: the numpad stops working under ubuntu. solution: shift+NumLock. if that still doesn't work: settings -> keyboard -> reset keyboard layout

vbox resize partition

January 16, 2013

1- create a new vdi with the wanted size
2- copy the old vdi into the new one (dos command): VBoxManage.exe clonehd c:…old.vdi c:…new.vdi --existing
3- link the new vdi to your vm and restart the vm
4- in your vm, turn the free space into a partition with the gparted tool: sudo gparted
5- if the created partition is /dev/sda3, mount it in any folder you want, /mnt for example: sudo mount /dev/sda3 /mnt

Meld sous ubuntu

January 11, 2013

Meld under ubuntu to compare files, and folders too. sudo apt-get install meld

icefaces : refresh whole page with code

     public void refreshPage() {
        FacesContext context = FacesContext.getCurrentInstance();
        Application application = context.getApplication();
        ViewHandler viewHandler = application.getViewHandler();
        UIViewRoot viewRoot = viewHandler.createView(context, context.getViewRoot().getViewId());
        context.setViewRoot(viewRoot);
    }

icefaces : force refresh a portion of page

    public void refreshUI(String componentId) {
        UIComponent uiComp = BaseBean.findComponentInRoot(componentId);
        if (uiComp != null) {
            uiComp.getChildren().clear();
        }
    }

    public static UIComponent findComponentInRoot(String id) {
        UIComponent component = null;

        FacesContext facesContext = FacesContext.getCurrentInstance();
        if (facesContext != null) {
            UIComponent root = facesContext.getViewRoot();
            component = findComponent(root, id);
        }

        return component;
    }

    public static UIComponent findComponent(UIComponent base, String id) {
        if (id.equals(base.getId())) {
            return base;
        }

        UIComponent kid = null;
        UIComponent result = null;
        Iterator<UIComponent> kids = base.getFacetsAndChildren();
        while (kids.hasNext() && (result == null)) {
            kid = kids.next();
            if (id.equals(kid.getId())) {
                result = kid;
                break;
            }
            result = findComponent(kid, id);
            if (result != null) {
                break;
            }
        }
        return result;
    }

exception : maven release error : Could not read chunk Size: secure connection truncated

October 11, 2012

Could not read chunk Size: secure connection truncated
http://adventuresindotnet.blogspot.fr/2010/09/svn-trouble.html

  • svn server 1.6.11
  • eclipse juno + svnkit 1.3.8
  • apache maven 2.2.1
    when I do a maven release, everything goes well up to tagging the version. But then, when it retrieves the tag with an "svn checkout", it shows the files being fetched from svn… and after a while it fails with an svn error message (could not read chunk size: secure connection truncated). On the apache httpd server side, the error message is the same as in the article.
    same problem as in the article. I increased the timeout, which did not solve the problem. Solution: change the svn client. I had svn-win32-1.5.6; I tried 1.6.6 / 1.6.11 / 1.6.15 (downloadable here http://sourceforge.net/projects/win32svn/files/) and ran "svn checkout --username xxx --password xxx https://svn.xxx.fr/project/tags/xxx" in a dos command line, in vain.
    with the latest svn client 1.7.6, the svn checkout works. But when I try a maven release-prepare => error: working copy is too old (which is normal: the svn working copy is format 10, created with an svn 1.6.11 server, and the 1.7.6 client is newer).
    finally I tried an svn checkout with the client svn-win32-1.6.19: it works. bingo

statsvn.org

June 21, 2012

shell: svn log -v --xml > logFile.log
shell: java -jar statsvn.jar logFile.log
see more options
details for ant task in my Drive

query that searches across all APEX components (to be improved continually)

May 2, 2012

accept search_text prompt "Enter search text: "

select application_id, page_id, 'Region' obj_type, region_name obj_name, region_source source
  from apex_application_page_regions where lower(region_source) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Region (condition)' obj_type, region_name obj_name,
       to_clob('--EXPR1:'||CHR(13)||CHR(10)||condition_expression1||CHR(13)||CHR(10)||CHR(13)||CHR(10)||'--EXPR2:'||CHR(13)||CHR(10)||condition_expression2) source
  from apex_application_page_regions where lower(condition_expression1 || ' ' || condition_expression2) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Item' obj_type, item_name obj_name, to_clob(item_source) source
  from apex_application_page_items where lower(item_source) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Item (condition)' obj_type, item_name obj_name,
       to_clob('--EXPR1:'||CHR(13)||CHR(10)||condition_expression1||CHR(13)||CHR(10)||CHR(13)||CHR(10)||'--EXPR2:'||CHR(13)||CHR(10)||condition_expression2) source
  from apex_application_page_items where lower(condition_expression1 || ' ' || condition_expression2) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Item (default value)' obj_type, item_name obj_name, to_clob(item_default) source
  from apex_application_page_items where lower(item_default) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Item (lov)' obj_type, item_name obj_name, to_clob(lov_definition) source
  from apex_application_page_items where lower(lov_definition) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Item (post computation)' obj_type, item_name obj_name, to_clob(source_post_computation) source
  from apex_application_page_items where lower(source_post_computation) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Item (readOnly condition)' obj_type, item_name obj_name, to_clob(read_only_condition_exp1) source
  from apex_application_page_items where lower(read_only_condition_exp1) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Process' obj_type, process_name obj_name, process_source source
  from apex_application_page_proc where lower(process_source) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Process (condition)' obj_type, process_name obj_name,
       to_clob('--EXPR1:'||CHR(13)||CHR(10)||condition_expression1||CHR(13)||CHR(10)||CHR(13)||CHR(10)||'--EXPR2:'||CHR(13)||CHR(10)||condition_expression2) source
  from apex_application_page_proc where lower(condition_expression1 || ' ' || condition_expression2) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Calcul' obj_type, execution_sequence||' '||item_name obj_name, to_clob(computation) source
  from apex_application_page_comp where lower(computation) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Calcul (condition)' obj_type, execution_sequence||' '||item_name obj_name,
       to_clob('--EXPR1:'||CHR(13)||CHR(10)||condition_expression1||CHR(13)||CHR(10)||CHR(13)||CHR(10)||'--EXPR2:'||CHR(13)||CHR(10)||condition_expression2) source
  from apex_application_page_comp where lower(condition_expression1 || ' ' || condition_expression2) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Branch' obj_type, TO_CHAR(process_sequence) obj_name, to_clob(branch_action) source
  from apex_application_page_branches where lower(branch_action) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Branch (condition)' obj_type, to_char(process_sequence) obj_name,
       to_clob('--EXPR1:'||CHR(13)||CHR(10)||condition_expression1||CHR(13)||CHR(10)||CHR(13)||CHR(10)||'--EXPR2:'||CHR(13)||CHR(10)||condition_expression2) source
  from apex_application_page_branches where lower(condition_expression1 || ' ' || condition_expression2) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Button (condition)' obj_type, BUTTON_SEQUENCE || ' ' || BUTTON_NAME obj_name,
       to_clob('--EXPR1:'||CHR(13)||CHR(10)||condition_expression1||CHR(13)||CHR(10)||CHR(13)||CHR(10)||'--EXPR2:'||CHR(13)||CHR(10)||condition_expression2) source
  from apex_application_page_buttons where lower(condition_expression1 || ' ' || condition_expression2) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Validation' obj_type, VALIDATION_NAME obj_name,
       to_clob('--EXPR1:'||CHR(13)||CHR(10)||validation_expression1||CHR(13)||CHR(10)||CHR(13)||CHR(10)||'--EXPR2:'||CHR(13)||CHR(10)||validation_expression2) source
  from apex_application_page_val where lower(validation_expression1 || ' ' || validation_expression2) like lower('%&search_text%')
UNION ALL
select application_id, page_id, 'Validation (condition)' obj_type, VALIDATION_NAME obj_name,
       to_clob('--EXPR1:'||CHR(13)||CHR(10)||condition_expression1||CHR(13)||CHR(10)||CHR(13)||CHR(10)||'--EXPR2:'||CHR(13)||CHR(10)||condition_expression2) source
  from apex_application_page_val where lower(condition_expression1 || ' ' || condition_expression2) like lower('%&search_text%')
UNION ALL
select ap.application_id application_id, -1 page_id, 'application_process' obj_type, to_char(ap.PROCESS_NAME) obj_name, process source
  from apex_application_processes ap where lower(ap.process) like lower('%&search_text%')
UNION ALL
select aa.application_id application_id, -1 page_id, 'authorization model' obj_type, to_char(aa.authorization_scheme_name) obj_name, to_clob(aa.scheme) source
  from apex_application_authorization aa where lower(aa.scheme) like lower('%&search_text%')
-- replace the "authorization model" query above with the following one if APEX 4
/*
UNION ALL
select aa.application_id application_id, -1 page_id, 'authorization model' obj_type, to_char(aa.authorization_scheme_name) obj_name,
       to_clob(aa.attribute_01 || ' || ' || aa.attribute_02 || ' || ' || aa.attribute_03 || ' || ' || aa.attribute_04 || ' || ' || aa.attribute_05 || ' || ' ||
               aa.attribute_06 || ' || ' || aa.attribute_07 || ' || ' || aa.attribute_08 || ' || ' || aa.attribute_09 || ' || ' || aa.attribute_10 || ' || ' ||
               aa.attribute_11 || ' || ' || aa.attribute_12 || ' || ' || aa.attribute_13 || ' || ' || aa.attribute_14 || ' || ' || aa.attribute_15) source
  from apex_application_authorization aa
 where lower(aa.attribute_01) like lower('%&search_text%') OR
       lower(aa.attribute_02) like lower('%&search_text%') OR
       lower(aa.attribute_03) like lower('%&search_text%') OR
       lower(aa.attribute_04) like lower('%&search_text%') OR
       lower(aa.attribute_05) like lower('%&search_text%') OR
       lower(aa.attribute_06) like lower('%&search_text%') OR
       lower(aa.attribute_07) like lower('%&search_text%') OR
       lower(aa.attribute_08) like lower('%&search_text%') OR
       lower(aa.attribute_09) like lower('%&search_text%') OR
       lower(aa.attribute_10) like lower('%&search_text%') OR
       lower(aa.attribute_11) like lower('%&search_text%') OR
       lower(aa.attribute_12) like lower('%&search_text%') OR
       lower(aa.attribute_13) like lower('%&search_text%') OR
       lower(aa.attribute_14) like lower('%&search_text%') OR
       lower(aa.attribute_15) like lower('%&search_text%')
*/
UNION ALL
select alov.application_id application_id, -1 page_id, 'lov' obj_type, to_char(alov.list_of_values_name) obj_name, to_clob(alov.list_of_values_query) source
  from apex_application_lovs alov where lower(alov.list_of_values_query) like lower('%&search_text%')
ORDER BY 1,2,3,4

exit the imp prompt (IMP-00002 error)

You are trying to import an oracle dump file with the imp command, but imp doesn't have access to that dump file. You will then be prompted to enter a new dump file path name.

IMP-00002: failed to open expdat.dmp for read
Import file: expdat.dmp >

use CTRL-D to exit the imp prompt

import oracle dump file

import dump file

imp witr/passwd@WITRDBIDENT file=/tmp/expdat.dmp fromuser=witr touser=witr log=/tmp/imp.log

import one table from dump file

imp witr/passwd@WITRDBIDENT file=/tmp/expdat.dmp fromuser=witr touser=witr tables=T_WITR log=/tmp/imp.log

reset a forgotten MySQL root password on Ubuntu

March 29, 2012

reset a forgotten MySQL root password on Ubuntu:

1) sudo service mysql stop
2) sudo mysqld_safe --skip-grant-tables
3) in another terminal: mysql -u root mysql
4) update user set Password=PASSWORD('newPassword') where user='root';
5) flush privileges;
6) exit;
7) that's it. source: http://www.howtoforge.com/reset-forgotten-mysql-root-password

svn repository creation note

March 28, 2012

don't forget to change the permissions after creating an svn repository with svnadmin:
#chown -R www-data:www-data /var/svn/*
#chmod -R 770 /var/svn/*

print to a Xerox 5230 network printer from ubuntu

March 28, 2012

print to a Xerox 5230 network printer from ubuntu:

1) vi /etc/cups/ppd/.ppd
2) change the line: *cupsFilter: "application/vnd.cups-postscript 0 /Library/Printers/Xerox/filter/XeroxPSFilter" to *%cupsFilter: "application/vnd.cups-postscript 0 /Library/Printers/Xerox/filter/XeroxPSFilter"
3) sudo service cups restart ==> ubuntuforums.org/showthread.php?t=1576382

logs Empathy

February 6, 2012

logs Empathy: ~/.local/share/TpLogger/logs

APEX note

February 3, 2012

never rename an APEX field; delete it and create a new one instead. Computations that depend on it no longer work correctly, even if you update the field name in the computation's PL/SQL.

highlight results with javascript

September 20, 2011

highlight results with javascript (prototype required)

function highlightWordSearching(componentId, search){
    if(search && !search.blank()){
        search = search.replace('&', '&amp;'); // escape '&' so the search matches the escaped innerHTML
        var search_regexp = new RegExp('>([^<])?(' + search.trim() + ')([^>])?<', 'ig');
        // wrap the match in a highlighting span (the wrapping markup was lost in the original post; the class name here is assumed)
        $(componentId).innerHTML = $(componentId).innerHTML.replace(search_regexp, '>$1<span class="highlight">$2</span>$3<');
    }
}

OneToMany association: hql query returns no results

Problem: an hql query returns null or an empty result while the corresponding sql query returns rows. If the target hibernate Java bean has an EmbeddedId, make sure the AttributeOverrides don't include a nullable column attribute. If they do, remove those nullable column attributes from the embedded id and put them directly in the bean as regular properties. This fixes the problem.

Wrong Id definition

@Entity
@Table(name="WITR_MARIAGE", uniqueConstraints = @UniqueConstraint(columnNames={"CUSTOMER_ID", "WITR_HUSBAND_ID", "WITR_WIFE_ID"}) )
public class WitrMariage  implements java.io.Serializable {

    private WitrMariageId id;
    private WitrHusband witrHusband;
    private WitrWife witrWife;

    @EmbeddedId
    @AttributeOverrides( {
            @AttributeOverride(name="customerId", column=@Column(name="CUSTOMER_ID", nullable=false, precision=22, scale=0) ),
            @AttributeOverride(name="witrHusbandId", column=@Column(name="WITR_HUSBAND_ID", nullable=false, precision=22, scale=0) ),
            @AttributeOverride(name="witrWifeId", column=@Column(name="WITR_WIFE_ID", nullable=false, precision=22, scale=0) ),
            @AttributeOverride(name="sort", column=@Column(name="SORT", precision=22, scale=0) ) } )
    public WitrMariageId getId() {
        return this.id;
    }

    public void setId(WitrMariageId id) {
        this.id = id;
    }

    /*
     *   WitrHusband getter & setter
     *   WitrWife getter & setter
     */

}

Right Id definition

@Entity
@Table(name="WITR_MARIAGE", uniqueConstraints = @UniqueConstraint(columnNames={"CUSTOMER_ID", "WITR_HUSBAND_ID", "WITR_WIFE_ID"}) )
public class WitrMariage  implements java.io.Serializable {

    private WitrMariageId id;
    private WitrHusband witrHusband;
    private WitrWife witrWife;
    private long sort;

    @EmbeddedId
    @AttributeOverrides( {
            @AttributeOverride(name="customerId", column=@Column(name="CUSTOMER_ID", nullable=false, precision=22, scale=0) ),
            @AttributeOverride(name="witrHusbandId", column=@Column(name="WITR_HUSBAND_ID", nullable=false, precision=22, scale=0) ),
            @AttributeOverride(name="witrWifeId", column=@Column(name="WITR_WIFE_ID", nullable=false, precision=22, scale=0) ) } )
    public WitrMariageId getId() {
        return this.id;
    }

    public void setId(WitrMariageId id) {
        this.id = id;
    }

    /*
     *   WitrHusband getter & setter
     *   WitrWife getter & setter
     *   sort getter & setter
     */

}

start and stop oracle DB sql traces with shell scripts

startSqlTrace.sh : start sql traces

#!/bin/bash
echo password:
read -s SYSPASS  # -s: don't echo the password
sqlplus system/$SYSPASS@WITRDB << FIN
alter system set sql_trace=true;
exit
FIN

stopSqlTrace.sh : stop sql traces

#!/bin/bash
echo password:
read -s SYSPASS  # -s: don't echo the password
sqlplus system/$SYSPASS@WITRDB << FIN
alter system set sql_trace=false;
exit
FIN

find out oracle DB service name or SID

If I have system access to the oracle DB and want to find out the service name:

> sqlplus system
password: XXXX
SQL> select sys_context('userenv','instance_name') from dual;

output

SYS_CONTEXT('USERENV','INSTANCE_NAME')
--------------------------------------------------
MY_INSTANCE_NAME

Now, to find out the SID:

SQL> select sys_context('userenv','sid') from dual;

oracle: use tkprof

================================================= 19/06/2011
use tkprof

  1. Log in with the oracle user
  2. sqlplus system/xxxx@SHEMA
  3. alter system set sql_trace=true;
  4. show parameter user_dump_dest;
  5. cd to the user_dump_dest directory
  6. tkprof xxxxx.trc /tmp/myAnalyse.out explain=user/pass@SHEMA sort=execpu
  7. don't forget: alter system set sql_trace=false;

connect by

We have the following data in table category:

category_id  parent_id  name               sort
10           null       véhicule motorisé
11           10         quatre roues       3
12           10         sans roues         1
13           10         deux roues         2

The recursive way to query the parent/child relationship data:

select level, lpad(' ',level*5,' ') || t.name from category t start with t.parent_id is null connect by t.parent_id = prior t.category_id

Result of the query:

1 véhicule motorisé
2      quatre roues
2      sans roues
2      deux roues

Now, to sort the children by the value of the sort column:

select level, lpad(' ',level*5,' ') || t.name from category t start with t.parent_id is null connect by t.parent_id = prior t.category_id order siblings by t.sort

Result of the query:

1 véhicule motorisé
2      sans roues
2      deux roues
2      quatre roues
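For illustration only (this is not from the original notes), the sibling-ordered walk that CONNECT BY ... ORDER SIBLINGS BY performs can be mirrored outside the database. A minimal Python sketch, with the column names and data taken from the category table above (the root's missing sort value is assumed null):

```python
# Rows of the category table: (category_id, parent_id, name, sort).
rows = [
    (10, None, "véhicule motorisé", None),
    (11, 10,   "quatre roues",      3),
    (12, 10,   "sans roues",        1),
    (13, 10,   "deux roues",        2),
]

def walk(parent_id, level=1):
    """Depth-first walk: START WITH parent_id IS NULL,
    CONNECT BY parent_id = PRIOR category_id, ORDER SIBLINGS BY sort."""
    children = [r for r in rows if r[1] == parent_id]
    children.sort(key=lambda r: (r[3] is None, r[3] or 0))  # null sorts last
    for cat_id, _, name, _ in children:
        yield level, name
        yield from walk(cat_id, level + 1)

tree = list(walk(None))
# siblings come out in sort order: sans roues (1), deux roues (2), quatre roues (3)
```

The key point is that each level is sorted independently, which is exactly what ORDER SIBLINGS BY does without breaking the hierarchy.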

Apex dynamic checkboxes

A way to display and handle APEX checkbox items from code

Display checkboxes from a select query: create a region of PL/SQL type with the following source

declare 

    cursor chk_cur is SELECT 
        id, label
    FROM t_witr
    ORDER BY label;

    ind integer := 1;
    l_chk_selected_ids varchar2(3000) := null;
    l_check_state varchar2(30) := 'CHECKED';

begin

    l_chk_selected_ids := :P1_CHK_SELECTED_IDS;

    htp.prn('<h2>Items selection</h2>');
    htp.prn('<table style="width:100%">');
    for col in chk_cur loop
        if ind mod 2 > 0 then 
            htp.prn('<tr><td>');
        else
            htp.prn('<td>');
        end if;
        if l_chk_selected_ids is not null and length(trim(l_chk_selected_ids)) > 0 and instr(l_chk_selected_ids,'#!' || col.id || '#!') <= 0 then
          l_check_state := 'UNCHECKED';
        else
          l_check_state := 'CHECKED';
        end if;
        htp.prn(apex_item.checkbox(1,col.id,l_check_state));
        htp.prn('<label class="chkLabel">'||col.label||'</label>');
        if ind mod 2 > 0 then 
            htp.prn('</td>');
        else
            htp.prn('</td></tr>');
        end if;

        ind := ind +1;
    end loop;
    htp.prn('</table>');

end;

Create a submit button, then create a process, executed after submit, that saves the items' selection state like the following

declare
  l_chk_selected_ids varchar2(3000);

begin
  FOR i in 1..APEX_APPLICATION.G_F01.count
  LOOP
    l_chk_selected_ids := l_chk_selected_ids || '#!' || APEX_APPLICATION.G_F01(i) || '#!';
  END LOOP;
 :P1_CHK_SELECTED_IDS := l_chk_selected_ids;
end;
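A note on the `'#!' || col.id || '#!'` wrapping used in both blocks above: testing a bare id with instr against a plain concatenation of ids would let id 1 match inside id 11; the delimiters make the membership test exact. A minimal Python sketch of the same encode/check pair (illustrative only, not APEX code):

```python
def encode_selected(ids):
    """Build the saved list the way the submit process does: '#!' || id || '#!' per id."""
    return "".join("#!%s#!" % i for i in ids)

def is_checked(selected, cat_id):
    """Mirror of: instr(l_chk_selected_ids, '#!' || col.id || '#!') > 0."""
    return ("#!%s#!" % cat_id) in selected

sel = encode_selected([1, 11])   # "#!1#!#!11#!"
# id 11 is found, and id 2 does not falsely match inside "11" thanks to the delimiters
```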