Fix Windows Update High Processor Usage

Windows Update can sometimes cause high processor usage (50%-100%).

Option 1

A possible cause worth checking is that you have run low on disk space.

Ensure that there are several GB of free disk space (i.e. more than 5 GB).

Option 2

If Windows is running as a virtual machine, try updating the VM host application,
e.g. check for a new version of VirtualBox.

Option 3

Try running the Windows Update troubleshooter; on Windows 10 it can be found under Settings > Update & Security > Troubleshoot.

Creating a Software Development Support Server

Software development requires a number of server-based tools to support it. This post provides a quick list of such tools and then describes how to set them up.

  • Source Code Control
    • git, svn, mercurial
  • Continuous Integration
    • Jenkins/Hudson
  • Static Analysis
    • Sonar

Git

Installation

Git is probably already installed, as it comes with many Linux distributions (e.g. Ubuntu). If not:

  • sudo apt-get install git

Configuration
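
Little configuration is needed for a server; at minimum, set an identity so that commits are attributed correctly (the name and email below are placeholders):

  • git config --global user.name "Your Name"
  • git config --global user.email "you@example.com"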

Apache

This is optional, but enables easier access to other services by providing proxies from port 80.

Installation

  • sudo apt-get install apache2
  • sudo a2enmod proxy
  • sudo a2enmod proxy_http

Configuration

None needed at this stage; the proxy configuration for individual services is added later (see the Jenkins and Sonar sections).

Postgres

This is used later to support sonar.

Installation

  • sudo apt-get install postgresql postgresql-contrib
  • sudo apt-get install pgadmin3 (you will probably use the gui admin tool at some point)
    • At the time of writing, apt-get installs v9.4 of postgresql and v1.18 of pgadmin3.
    • Although this partially works, pgadmin3 core-dumped several times at me.
    • This link has instructions for installing pgadmin3 v1.20
  • sudo -u postgres psql postgres (to configure the postgres admin user)
    • \password postgres (at the psql prompt: to set the postgres user password)
    • CREATE EXTENSION adminpack; (to enable pgadmin to work with psql)
    • \q (to quit psql)
  • Edit the file ‘/etc/postgresql/9.4/main/pg_hba.conf’
    • Change the line
      # Database administrative login by Unix domain socket
      local      all         postgres            peer
      to
      # Database administrative login by Unix domain socket
      local      all         postgres            md5
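
After editing pg_hba.conf, restart PostgreSQL so that the authentication change takes effect:

  • sudo service postgresql restart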

Jenkins

Installation

  • wget -q -O - https://jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
  • sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
  • sudo apt-get update
  • sudo apt-get install jenkins

Configuration

  • edit /etc/default/jenkins
  • change ‘HTTP_PORT=8080’ to define the port you wish to use for Jenkins, e.g. HTTP_PORT=8180
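
Then restart Jenkins so that the port change takes effect:

  • sudo service jenkins restart (or sudo /etc/init.d/jenkins restart)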

In order to get a proxy to Jenkins from port 80, e.g. http://localhost/jenkins, we must configure the default apache virtual host.

Add the following to the end of the <VirtualHost> element in the 000-default.conf file, typically found in /etc/apache2/sites-available.

ProxyRequests Off
AllowEncodedSlashes NoDecode
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass /jenkins http://localhost:8180/jenkins
ProxyPassReverse /jenkins http://localhost:8180/jenkins
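
Note that proxying to http://localhost:8180/jenkins assumes Jenkins itself is serving from the /jenkins context path. With the Debian package this is usually done in /etc/default/jenkins (the exact variable names may vary between package versions):

PREFIX=/jenkins
JENKINS_ARGS="--webroot=/var/cache/jenkins/war --httpPort=$HTTP_PORT --prefix=$PREFIX"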

Then restart apache

  • sudo service apache2 restart (or sudo /etc/init.d/apache2 restart)

Sonar

Installation

  • add ‘deb http://downloads.sourceforge.net/project/sonar-pkg/deb binary/’ to the end of the file /etc/apt/sources.list (see http://sonar-pkg.sourceforge.net/)
  • sudo apt-get update
  • sudo apt-get install sonar
  • edit file /etc/opt/sonar/conf/sonar.properties
    • find the following section and uncomment/modify the appropriate lines
      # User credentials.
      # Permissions to create tables, indices and triggers must be granted to JDBC user.
      # The schema must be created first.
      sonar.jdbc.username=sonar
      sonar.jdbc.password=sonar
    • find the following section and uncomment/modify the appropriate line as follows
      #----- PostgreSQL 8.x/9.x
      # If you don't use the schema named "public", please refer to http://jira.codehaus.org/browse/SONAR-5000
      sonar.jdbc.url=jdbc:postgresql://localhost/sonar
    • Modify the sonar port using the following lines
      # TCP port for incoming HTTP connections. Disabled when value is -1.
      sonar.web.port=8280
    • Modify the url path/context
      # Web context. When set, it must start with forward slash (for example /sonarqube).
      # The default value is root context (empty value).
      sonar.web.context=/sonar
  • Create user ‘sonar’ in postgres
    • start pgadmin3 (or use psql, as below)
    • add a new Login Role with the name and password as written above in the sonar.properties file
    • add a new database named ‘sonar’ (or as named in the sonar.properties file)
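
Alternatively, if pgadmin3 misbehaves, the same role and database can be created from the command line (the name and password must match those in sonar.properties):

  • sudo -u postgres psql -c "CREATE ROLE sonar LOGIN PASSWORD 'sonar';"
  • sudo -u postgres psql -c "CREATE DATABASE sonar OWNER sonar;"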

Configuration

  • edit /etc/default/sonar
  • change ‘HTTP_PORT=8080’ to define the port you wish to use for Sonar, e.g. 8280 (this should match the sonar.web.port set above)

In order to get a proxy to Sonar from port 80, e.g. http://localhost/sonar, we must configure the default apache virtual host.

Add the following to the end of the <VirtualHost> element in the 000-default.conf file, typically found in /etc/apache2/sites-available.

ProxyRequests Off
AllowEncodedSlashes NoDecode
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass /sonar http://localhost:8280/sonar
ProxyPassReverse /sonar http://localhost:8280/sonar

(Only the last two lines are required if you have the others already in the file)

Then restart apache

  • sudo service apache2 restart (or sudo /etc/init.d/apache2 restart)

Ports

Always a good idea to keep a record of what service is running on which port.

80: apache
8080: app server (wildfly, tomcat, etc.)
5432: postgresql
8180: jenkins
8280: sonar

Developing and Managing Multiple Modules: Eclipse OSGI and Maven

Eclipse and maven do not play nicely with each other. They try, but there are some fundamental differences that make it tricky. I will first describe some of those differences, and then discuss how I have gone about developing and managing Eclipse/OSGI bundles using maven and Eclipse.

Differences

Version numbers

  • Eclipse versions are numbered X.Y.Z.qualifier. The qualifier is optional and typically adds a timestamp.
  • Maven versions are typically numbered X.Y.Z-SNAPSHOT. The -SNAPSHOT suffix is optional and is replaced by a timestamp on deployment.
  • These seem similar, the only difference being the use of a ‘.’ or a ‘-’ to separate the main version number from the qualifier.
  • The problem occurs with respect to the interpretation of a qualifier/SNAPSHOT variant of a version.
    • In Eclipse a qualified version is considered to be newer than the version without the qualifier (1.0.0.qualifier is newer than 1.0.0).
    • In Maven a SNAPSHOT version is considered a precursor, i.e. an earlier (development) version (1.0.0-SNAPSHOT is older than 1.0.0).

Dependency Scope

  • Eclipse/OSGI dependencies (defined in the MANIFEST.MF file) are not, by default, transitive.
    • I.e. if A depends on B, which depends on C, then A does not, by default, have access to the definitions in C.
    • If B ‘reexports’ C, then A has access to it.
  • Maven dependencies (defined in the pom.xml file) are, by default, transitive.
    • I.e. if A depends on B, which depends on C, then A does have access to the definitions in C.
    • If B gives the dependency on C ‘scope=provided’ then A will not have access to C at compile time, but to execute A, a runtime dependency on C will be needed. (Both declaration styles are sketched below.)
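
As a sketch, the two styles of dependency declaration look as follows (bundle and artifact names are illustrative):

MANIFEST.MF (B re-exporting C):

Require-Bundle: org.example.c;visibility:=reexport

pom.xml (B hiding C from A's compile class path):

<dependency>
  <groupId>org.example</groupId>
  <artifactId>c</artifactId>
  <version>1.0.0</version>
  <scope>provided</scope>
</dependency>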

Dependencies – which Version

  • Both Eclipse/OSGI and maven allow the version number of a dependency to be defined; in fact it is to be encouraged. However, the class path may not be what you expect.
    • The OSGI framework/container/runtime allows different versions of the same bundle/jar to coexist, and hence multiple different versions of the same code may be executed: each bundle uses the version of the bundle it references, at both compile time and runtime.
    • In maven, the class path is determined by its Dependency Mediation rules, meaning that you get the “nearest definition” (see the example below). This means that at compile time you can force a dependency on a particular version (by explicitly declaring a dependency on that version); however, at runtime the class path may provide a different version of the jar.
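
For example, maven's “nearest definition” rule resolves the following tree (module names illustrative) to C 2.0 on A's class path, even though B was compiled against C 1.0:

A
├── B
│   └── C 1.0
└── C 2.0  (nearest to A, so this one wins)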

Development

I am a big fan of Eclipse as a Java IDE. I find it works well for me. Except, when I want to use maven to build the modules.

One easy option is to use Tycho. This solution means that a developer can essentially avoid having to understand maven. Tycho takes over maven almost completely and in my opinion using Tycho cannot be said to be ‘using maven’. Tycho-on-maven and native/original-maven are best considered to be separate things.

Tycho has many advantages, but I find that it is slow to build, and releasing/deploying p2 repositories does not seem to be as simple as the nexus/maven approach. Hence my investigation into an alternative.

One thing I particularly like about maven is its directory layout. In particular, a maven module contains the source code for the module alongside the tests for that module, and those tests are, by default, executed as part of the build. It actively encourages module-based unit testing. The deployed jar (sensibly) does not include the test code.

The Eclipse m2e project goes a long way towards making the use of Eclipse for the development of maven modules nice and easy. Except, that is, when those maven modules are also developed as OSGI-bundles/eclipse-plugins.

My approach to building OSGI/Eclipse bundles using maven is as follows:

Code Layout

Use the standard maven folder and file layout structure, as shown below.
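
For reference, the standard layout keeps production code and tests side by side:

src/main/java        (production code)
src/main/resources   (production resources)
src/test/java        (test code)
src/test/resources   (test resources)
target/              (build output)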

MANIFEST.MF

    • Use the org.apache.felix:maven-bundle-plugin to generate a manifest file (a sketch follows this list),
    • or put a hand-written manifest file in src/resources and tell maven to include it in the built jar using the maven-jar-plugin.
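
For the first option, a minimal plugin configuration might look like this (the Export-Package value is illustrative):

<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Export-Package>org.example.mybundle.api</Export-Package>
    </instructions>
  </configuration>
</plugin>

with the module's packaging element set to ‘bundle’.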

In either case, tell the Eclipse IDE where to find the MANIFEST.MF file using the file .settings/org.eclipse.pde.core.prefs, for each project, as shown below,

BUNDLE_ROOT_PATH=target/classes
eclipse.preferences.version=1

(change the value of BUNDLE_ROOT_PATH to be the location of the MANIFEST.MF file)

Handling the Differences

Version numbering: This should not really be an issue provided you only release “proper” versions (i.e. non-SNAPSHOT).

Dependency Scope: Limit the scope of maven dependencies to ‘provided’ unless you want the depended-on artefact to be included in the exported API of your artefact.

Dependencies – which Version: I have yet to find a solution to this one.

Dependencies to existing eclipse bundles

Eclipse has a large number of existing bundles which are usually found in the eclipse IDE installation, or via a p2 site referenced in a .target file. If we build with maven, how do we tell maven about these bundles?

Generate p2 Repository

To generate a p2 repository from maven artifacts you can use the org.reficio:p2-maven-plugin. The configuration in the pom file tells the (maven) plugin which artefacts to include in the p2 repository.
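
A minimal configuration might look like this (the plugin version and artifact id are illustrative):

<plugin>
  <groupId>org.reficio</groupId>
  <artifactId>p2-maven-plugin</artifactId>
  <version>1.2.0</version>
  <executions>
    <execution>
      <id>default-cli</id>
      <configuration>
        <artifacts>
          <!-- the maven artifacts to wrap as bundles and publish -->
          <artifact><id>commons-io:commons-io:2.4</id></artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>

The repository is then generated with ‘mvn p2:site’.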

Cloud IDE for DSLs

I am attempting to provide a cloud-based IDE for DSLs. The first question is where to start. There are many great Cloud-IDEs out there, but they are all aimed at supporting various programming environments. For DSL support, I want to find something minimal that I can subsequently extend with DSL extensions.

My requirements would be:

  • Minimal starting point (similar to creating an Eclipse RCP)
  • Plugin architecture
  • Preference for ability to use Java in plugins
  • Open Source (of course!)

I have investigated the following as possible starting points, details are in separate posts (follow the links), but a brief summary is included here:

  • Cloud 9
    • Widely used
    • Mainly Javascript
    • Potential problem with the open source licence, currently unsure if I can legally use it!
  • Eclipse-Che / Codenvy
    • GWT and Java based
    • Runs in its own tomcat (ridiculous! but I have attempted to get round this)
    • Complex to reduce to a minimal IDE
  • Eclipse-Orion
    • OSGI at the server
    • Javascript on client
    • Complex to minimise
    • not that easy to add plugins
  • Codiad
    • php based
    • simplest starting point
    • plugins are easy
    • needs a php-java bridge to execute java
    • Can run it in tomcat with a little persuasion
    • tomcat 8 requires the quercus war (for php interpretation), which has a GPL licence.

Examples: UML: Simple Hello World

Most programming language tutorials start with a very simple “Hello World!” program. This post discusses a very simple “Hello World!” UML model.

  1. UML is a modelling language (Unified Modelling Language). It is a graphical language designed for communication about software (though it can be, and is, used for other things).
  2. A model is an abstraction of the real thing, and as such, a UML model is an abstraction of the software which it is describing.
  3. An abstraction of something hides, or simply doesn’t show, certain detail about that something. UML can be used to communicate a variety of different abstractions (views) of the software.
  4. UML is an Object-Oriented modelling language, and hence is best used to model Object-Oriented software. (Although various approaches for using UML to model non-Object-Oriented software do exist.)

The Hello World program is, to quote wikipedia,

“used to illustrate to beginners the most basic syntax of a programming language. It is also used to verify that a language or system is operating correctly”

Similarly, I am using it here to illustrate some basic UML syntax. There are many different approaches to using UML; I show here an approach that I like and find works.

First, let us look at the typical “Hello World!” program written in a number of different programming languages (versions in many others exist), thus helping us to form a UML abstraction that could be implemented in any of them. [The normal design process would involve constructing the UML model first, but this is not a normal design project.]

Java
public class HelloWorldApp {
  public static void main(String[] args) {
    System.out.println("Hello World!");
  }
}
C++
#include <iostream>

int main() {
  std::cout << "Hello World!" << std::endl;
  return 0;
}
C#
class HelloWorldApp {
  public static void Main(string[] args) {
    System.Console.WriteLine("Hello World!");
  }
}

Once these programs have been compiled, an executable (binary) is formed and this is what the user executes in order to run the program. Our first UML model is a representation of this executable:

The different things that make up a UML model are known as “model elements”. This model element is called an Artifact and has had the stereotype <<executable>> applied to it. [Follow the links for more detail about artifacts or stereotypes.]

This is a simple model, in UML. It abstracts away all the programming detail and simply represents the executable. I think, however, that a little more modelling would be useful, perhaps something that shows a bit about the program that is manifested by the executable.

An option would be to add to our model a package, containing a class, which contains a static method,

[Diagram: a package containing a class with a static method]

and we can show that the executable Artifact manifests this package,

[Diagram: the executable artifact manifesting the package]

However, although this is a perfectly valid UML model, there are a number of things about this model that make it unsatisfactory to me:

  • It does not really form much of an abstraction from the code.
  • Neither is it a true representation of the code for each of the three programs shown above (C++, Java, C#).
    • the string type is named differently in each programming language
    • there is no class in the C++ version, nor does its main function take parameters

Looking back at our Hello World programs from above, these are all quite acceptable as a first HelloWorld program. However, although they are all written in Object-Oriented (OO) languages, they cannot really be said to be Object-Oriented programs. They are all very similar to the C version of the program and thus not very Object-Oriented.

C
#include <stdio.h>

int main() {
  printf("Hello, world!\n");
  return 0;
}

The (static) “main” procedure/method in each of the OO languages is there as an entry point to the program. If we are to execute a true OO program, this entry point procedure should be used to enter the program’s world of objects. In other words it needs to instantiate an initial (entry point) object and start the object executing.

Following this approach, the above 3 examples could be rewritten as follows:

Java
class Greeter {
  public void start() {
    System.out.println("Hello World!");
  }
}

public class HelloWorldApp {
  public static void main(String[] args) {
    Greeter greeter = new Greeter();
    greeter.start();
  }
}
C++
#include <iostream>

class Greeter {
public:
  void start() {
    std::cout << "Hello World!" << std::endl;
  }
};

int main() {
  Greeter greeter;
  greeter.start();
  return 0;
}
C#
class Greeter {
  public void start() {
    System.Console.WriteLine("Hello World!");
  }
}

class HelloWorldApp {
  public static void Main(string[] args) {
    Greeter greeter = new Greeter();
    greeter.start();
  }
}

Now all three of these follow a similar structure and a better abstraction can be formed.

We add a property to the executable Artifact which indicates the initial object that is instantiated when the program runs,

and we can now provide a class that is a better abstraction of all three programs,

[Diagram: the Greeter class with its start() method]

This UML model is an abstraction that shows only two aspects of the Hello World program:

  1. A view of the binary, or executable, artifact that is produced by a program compiler
  2. A class diagram that shows the class and method that are executed.

All other aspects of the program are not shown.

In a future post I will continue to use the Hello World example in order to illustrate the use of other parts of the UML.

Good Notations (A. N. Whitehead)

…by relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect, increases the mental power of the race.

Alfred North Whitehead,
British mathematician,
logician and philosopher

Simple Transformations

A simple transformation is uni-directional from a source data structure to a target data structure.

SiTra is (in my biased opinion) the easiest way to write a simple uni-directional model transformation.

SiTra is a simple Java library supporting a programming approach to writing transformations. It aims, firstly, to allow transformations to be written in plain Java and, secondly, to provide a minimal framework for executing them. SiTra consists of two interfaces and a class that implements a transformation algorithm; the aim is to facilitate a style of programming that incorporates the concept of transformation rules.
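
To give a flavour of the style, here is a minimal sketch in the spirit of SiTra's two interfaces (the names and signatures are illustrative; consult SiTra itself for the real API):

// A transformer finds and applies matching rules; a rule maps a
// source type to a target type.
interface Transformer {
    Object transform(Object source); // apply the first matching rule
}

interface Rule<S, T> {
    boolean check(S source);                 // does this rule apply to this source object?
    T build(S source, Transformer t);        // construct the target object
    void setProperties(T target, S source, Transformer t); // fill in the target's details
}

// Illustrative source and target types.
class Person { String name; }
class Employee { String name; }

// An example rule transforming a Person into an Employee.
class PersonToEmployee implements Rule<Person, Employee> {
    public boolean check(Person source) { return source.name != null; }
    public Employee build(Person source, Transformer t) { return new Employee(); }
    public void setProperties(Employee target, Person source, Transformer t) {
        target.name = source.name;
    }
}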

Software Development: Analysis

Analysis is the process (phase) of discovering what the system is intended to do, without getting drawn into how it is going to do it. One way to think of it is that analysis is the process of describing the problem for which we will subsequently design and develop a solution.

Generally there are many different stakeholders who place requirements on the system being developed; these may include, for example:

  • The end user (of the system)
  • The customer (who is paying for the system)
  • The business (that is developing the system)

There are three outputs from the analysis phase:

  • Problem Space Concepts and Relationships
  • Use Cases and Actors
  • External Interfaces

A good starting point for performing system (or requirements) analysis is the text that is provided as the ‘Requirements’ from each of the stakeholders. Although text is a very ambiguous and thus poor means to express requirements, it is the easiest way to capture the initial communication of requirements.

The analysis phase in the development process is where we develop the initial ambiguous requirements into something precise that can be verified, and that enables us to tell when the system is finished.

Object-Oriented Analysis is a technique where we identify objects and relationships in ‘the requirements’ and express these in the form of a UML Class Diagram. These ‘Classes’ are not classes in the sense of ‘programming language’ (e.g. Java) classes; they are simply a way to represent a set (class) of objects that exists in the Problem Space for which we are developing a Solution. For example, from the requirement ‘a customer places an order’, Customer and Order would be candidate analysis classes, related by a ‘places’ association.

Software Development: Examples: Hello World: Design

The design of the system is presented from a number of different viewpoints. There is no specific ordering of these viewpoints; they simply present different aspects of the design of the system.

  • Logical: The conceptual aspects of the system, uncluttered by deployment or target platform issues.
  • Physical: The physical, tangible aspects of the system, i.e. what can be picked up and touched.
  • Deployment: How the Logical maps to the Physical.
  • Implementation: This viewpoint focuses on the organization of artefacts that are stored and need to be configuration controlled, e.g. source files, folders, binaries, config files, etc.
  • Runtime: This viewpoint focuses on the objects that exist or are constructed at/during runtime, i.e. whilst the system is operating.