Parallel Processing of Large Files using Rust

In the previous posts I have discussed Shared State and State Change.

Let us put all this to good use in this post! 

A standard use case is where we have a large data file (e.g. a CSV like the Price Paid Data) that needs to be crunched, with each row transformed into a new row (there is no aggregation going on). The simplest processing paradigm is to do everything sequentially:

Step 1: Read next row from file

Step 2: Process row

Step 3: Write row to output file

Step 4: Go to 1 if not End Of File

At the end of the process we end up with a file having the same number of rows but a different set of columns.

This is easy to code up (a few lines of Python!) but impossible to scale. As our file grows in size, so does the wait: sequential reading and writing increases linearly with the size of the file, and the more processing we need to do in each loop, the longer the wait becomes.

If we have to process the file again and again (to apply different aggregations and transformations) things become more difficult. 

Figure 1: Creating scope for parallel processing, using an iterator (top) and chunks (bottom).

If the large data file is ‘read-only’, things become easier to process. We can do parallel reads without any fear. Even with sequential reads we can process each row in parallel and batch the aggregation step (which in this case is a simple row write) – so we don’t write one row at a time but a batch of rows. This can be seen in Figure 1 (top). This is still not the best option: the iteration time will increase as the file size increases.

The real power comes from the use of a distributed file system, which means that we can fully parallelize the pipeline up to the aggregation step (where we still batch-write rows). This can be done by breaking the file into blocks or chunks so that we can iterate in parallel.

The chunking of the file is still a sequential step, but it needs to be done only once, as the file is loaded into the file system. After that we can perform any operation on the chunks in parallel.
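As a rough sketch of what the chunking step can look like in Rust (the function name, the chunk-count parameter and the newline alignment are illustrative assumptions on my part), we can compute line-aligned byte ranges once and then hand each range to its own iterator:

use std::fs::File;
use std::io::{BufRead, BufReader, Seek, SeekFrom};

// Split the file at `path` into roughly equal byte ranges, aligned to line
// boundaries so that no row is cut in half. Each (start, end) range can then
// be iterated independently by its own thread.
fn chunk_boundaries(path: &str, chunks: u64) -> std::io::Result<Vec<(u64, u64)>> {
    let file = File::open(path)?;
    let len = file.metadata()?.len();
    let mut reader = BufReader::new(file);
    let mut boundaries = Vec::new();
    let mut start = 0u64;
    for i in 1..=chunks {
        let end = if i == chunks {
            len // the last chunk runs to the end of the file
        } else {
            // Jump to the approximate split point, then skip to the next newline.
            reader.seek(SeekFrom::Start((len * i) / chunks))?;
            let mut partial = String::new();
            reader.read_line(&mut partial)?;
            reader.stream_position()?
        };
        boundaries.push((start, end));
        start = end;
    }
    Ok(boundaries)
}

Each (start, end) range can then be fed into the iterate-transform-write pipeline described next.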

As a means of learning Rust, I have attempted to write a single iterator based program with parallel processing of the transformation function and a batched writer.

There is a pitfall here: we can’t keep adding handlers, as the overhead of parallelization will start to creep up. At the end of the day the writer is a single-threaded processor, so it cannot be scaled up and can only keep pace with a given set of handlers at a time (depending on the production rate of each handler).
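The shape of such a pipeline is sketched below. This is an illustrative sketch, not the actual program: the file names, the placeholder transform, the handler count of four and the batch size of 1,000 are all assumptions. It uses only std::thread and mpsc channels: one reader iterates the input, a pool of handler threads transforms rows in parallel, and a single writer thread batches rows to the output.

use std::fs::File;
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() -> std::io::Result<()> {
    // Rows flow: reader -> (row channel) -> handlers -> (output channel) -> writer.
    let (row_tx, row_rx) = mpsc::channel::<String>();
    let row_rx = Arc::new(Mutex::new(row_rx)); // mpsc receivers cannot be cloned, so share one
    let (out_tx, out_rx) = mpsc::channel::<String>();

    // Handler threads: transform rows in parallel.
    let mut handlers = Vec::new();
    for _ in 0..4 {
        let row_rx = Arc::clone(&row_rx);
        let out_tx = out_tx.clone();
        handlers.push(thread::spawn(move || loop {
            let row = row_rx.lock().unwrap().recv(); // lock released after this statement
            match row {
                Ok(row) => {
                    let transformed = row.to_uppercase(); // placeholder transform
                    if out_tx.send(transformed).is_err() {
                        break;
                    }
                }
                Err(_) => break, // reader finished and dropped its sender
            }
        }));
    }
    drop(out_tx); // writer stops once every handler has finished

    // Single writer thread: collect rows into batches before writing.
    let writer = thread::spawn(move || -> std::io::Result<()> {
        let mut out = BufWriter::new(File::create("output.csv")?);
        let mut batch: Vec<String> = Vec::with_capacity(1000);
        for row in out_rx {
            batch.push(row);
            if batch.len() == 1000 {
                writeln!(out, "{}", batch.join("\n"))?;
                batch.clear();
            }
        }
        if !batch.is_empty() {
            writeln!(out, "{}", batch.join("\n"))?;
        }
        out.flush()
    });

    // Reader: iterate the input file and feed the handlers.
    for line in BufReader::new(File::open("input.csv")?).lines() {
        row_tx.send(line?).expect("a handler should still be running");
    }
    drop(row_tx); // signal end of input

    for h in handlers {
        h.join().unwrap();
    }
    writer.join().unwrap()
}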

There is a way of parallelizing the writer as well: we can produce a chunked file as output. This allows us to run parallel iterate-transform-write pipelines on each chunk. This is shown in Figure 2.

Figure 2: Chunking for source and destination.

The file for the Rust program is below. 

Expectation and State Change

Continuing on from the previous post about shared state – let us try to build some concept of expected and actual state. This is looking in from the outside.

A few assumptions first:

  1. State can be written or read.
  2. In terms of multiple processes writing to a shared state store, we only care about state read and state written, and we assume that half-updates are not allowed at the lowest level. To simplify, we are not looking at object updates where, say, two out of three items are updated to the new value but the third one is not: at the level of the object we have a partial update, but at the level of the individual items we will see either the new value or the old value.
  3. Reads can be with expectation or without expectation (other than the core expectation of ‘success’ and a valid ‘format’) – either the reader uses the value read (e.g. in an ‘if’ condition) or simply passes it on without using it (e.g. as an access API) – what the reader does with the value is independent of the issue of shared state.
  4. Writes can be with checks or without – either the writer doesn’t care about the current state before writing the new state, OR it does (so-called compare and swap).

Therefore the standard model for un-secured parallel read-write of state involves all types of Reads (R) and Writes (W) to the same shared State (S). From the previous post we know the conditions that lead to issues with access to shared state.

Figure 1: What is the state written by writers (Wa, Wb, Wc) and what is the state read by readers (Ra, Rb, Rc)?

In Figure 1 there is no way of knowing what the final state of S is, given three parallel writes by Wa, Wb and Wc. The expectation is that S is equal to 1, 2 or 3. Similarly, we cannot know what values are read by the parallel readers Ra, Rb and Rc. The expectation again is that the values are 1, 2 or 3. But depending on S, readers may see non-initialised values as well (for example if a read happens before any write).

The writer in this case treats the State as a black box. Therefore, there are no expectations of current value of S and writes are done in a fire and forget manner.

One way of improving this is to make writes that do compare and swap (CAS) – that is forming an expectation of existing state before changing it.

Figure 2: Compare and Swap – with pitfalls

Figure 2 shows this at work. Two writers (W1 and W2) are writing to state S at the same time. We cannot be sure when they start and finish. The initial value of S is 100. W1 is attempting to change S from 100 to 300 and W2 from 100 to 200. Since these are overlapping writes we need to provide some sort of ordering to achieve consistency.

To provide ordering we write with expectation. To build the expectation each writer must perform a read first. This initial read gives the writer a foundation for its own write. Usually the subsequent compare-and-swap operation is an atomic one with support at the lowest levels (e.g. see sun.misc.Unsafe and its use by the concurrency APIs in Java for CAS).

W1 initiates the write by forming an expectation of the current state (100). While it attempts the CAS operation on S, W2 also initiates a write by forming an expectation. As W1’s CAS has not finished yet, the current state is still 100. W2 is then able to slip its CAS ahead of W1 (due to, say, thread priority). Its CAS compares expectation to actual, finds a match, and sets S to the new value of 200. Now W1 catches up and issues its CAS request. The CAS checks the current state against the expected state – this no longer matches, as the current state is 200 and the expected state is 100. The operation fails and W1 is free to retry (with, say, exponential back-off).
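A minimal sketch of this sequence in Rust, using the standard library’s atomics (the values mirror the figure; which writer wins the race varies from run to run):

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let s = Arc::new(AtomicU64::new(100)); // initial value of S

    let mut writers = Vec::new();
    for new_value in vec![300u64, 200] { // W1 wants 300, W2 wants 200
        let s = Arc::clone(&s);
        writers.push(thread::spawn(move || {
            // Form an expectation by reading the current state first.
            let expected = s.load(Ordering::SeqCst);
            // CAS: only succeeds if S still equals `expected`.
            match s.compare_exchange(expected, new_value, Ordering::SeqCst, Ordering::SeqCst) {
                Ok(prev) => println!("CAS ok: {} -> {}", prev, new_value),
                Err(actual) => println!("CAS failed: expected {}, found {} (would retry)", expected, actual),
            }
        }));
    }
    for w in writers {
        w.join().unwrap();
    }
    println!("final S = {}", s.load(Ordering::SeqCst));
}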

Putting Expectations to Work

Let’s put forward a question at this stage:

What value does R2 get when it tries to read S? As we can see, the value of S changes from 100 to 200 while the read is taking place. If reads were instantaneous, R2 should get the value 100.

The answer is that it depends on the specific implementation. 

We can use some sort of locking mechanism to coordinate between reads and writes. This would solve the parallel access issue by making sure there is only one reader or writer at a given time. But this is not efficient, as we are blocking parallel reads as well.

The main issue is that we don’t want to lock large chunks (e.g. a full document or a full row) but we may have to as we don’t want partial state updates to be available.

To be ‘atomic’ in the case of a multi-part object (like a document or a row with multiple columns) means seeing either all the changes that are part of a ‘write transaction’ reflected, or none at all.

For example, say we have a document that contains details of a game player: ‘games played’, ‘games lost’, ‘games won’ and ‘no results’. If we don’t provide atomic behaviour when updating after a game, we could allow reads that don’t make sense (e.g. where ‘games played’ is different from the sum of ‘games lost’, ‘games won’ and ‘no results’).

As we spread the scope of the transaction the atomic update guarantee requires locks on more objects.

The other solution (rather than object wide locks) is to use a pointer to the object from the ‘table’ or ‘index’. Here an object is immutable. New versions require creation of new objects and updating a reference to point to the new version.

A single reference value can be easier to update atomically (e.g. using Compare and Swap) than a whole object. Therefore there is no need to block parallel reads and writes if we can guarantee that at any time the pointer to the object will be updated atomically. In this case, reads will see either the new value or the old value.
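A hedged sketch of this pattern in Rust is below, assuming the third-party arc-swap crate for the atomic pointer swap (any equivalent mechanism would do; the Player type mirrors the earlier example): readers load the current immutable version, writers build a new version and swap the pointer to it.

use arc_swap::ArcSwap; // assumed dependency: the arc-swap crate
use std::sync::Arc;

struct Player {
    games_played: u32,
    games_won: u32,
    games_lost: u32,
    no_results: u32,
}

fn main() {
    // The shared 'pointer' to the current immutable version of the object.
    let current = ArcSwap::from_pointee(Player {
        games_played: 10,
        games_won: 6,
        games_lost: 3,
        no_results: 1,
    });

    // Reader: always sees a complete version (old or new), never a partial update.
    let before = current.load();
    println!("played = {}", before.games_played);

    // Writer: build a new immutable version, then atomically swap the pointer to it.
    current.store(Arc::new(Player {
        games_played: before.games_played + 1,
        games_won: before.games_won + 1,
        games_lost: before.games_lost,
        no_results: before.no_results,
    }));

    let after = current.load();
    println!("played = {}, won = {}", after.games_played, after.games_won);
}

Note that the swap changes only the reference: an old version stays alive for any reader that loaded it before the swap and is reclaimed once the last such reader is done with it.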

Furthermore, what if there are relationships between entities that need to be updated simultaneously (a group of interrelated objects being updated as one)? For example, between game result data and player data, or between a money sender and a receiver.

In the above case we need to guarantee that all related pointer values (as required by the transaction) will be updated atomically. This is a lot harder to achieve.

One way this could be done is by creating a duplicate set of inter-related objects with the newer versions. This allows reads to continue (writes are blocked so that we don’t have multiple competing new versions) while the updated versions are being created. The problem comes when it is time to switch all of the related objects to the newer version without exposing a mix of old and new versions.

For this the traditional solution is to lock all connections (for reads or writes) while the version references are updated. Practically this should be a quick operation as the new structures already exist. The old versions can then be removed permanently (usually this is a compaction procedure that releases storage space).

Housing Market: Auto Correlation Analysis

In this post we take a look at the housing market data, which consists of all the transactions registered with the UK Land Registry since 1996. So let’s get the copyright out of the way:

Contains HM Land Registry data © Crown copyright and database right 2018. This data is licensed under the Open Government Licence v3.0.

The data-set from HM Land Registry has information about all registered property transactions in England and Wales. The data-set used for this post has all transactions till the end of October 2018. 

To make things slightly simpler and to focus on the price paid and number of transactions metrics, I have removed most of the columns from the data-set and aggregated (sum) by month and year of the transaction. This gives us roughly 280 observations with the following data:

{ month, year, total price paid, total number of transactions }

Since this is a simple time-series, it is relatively easy to process. Figure 1 shows this series in a graph. Note the periodic nature of the graph.

Figure 1: Total Price Paid aggregated (sum) over a month; time on X axis (month/year) and Total Price Paid on Y axis.

The first thing that one can try is auto-correlation analysis to answer the question: Given the data available (till end-October 2018) how similar have the last N months been to other periods in the series? Once we identify the periods of high similarity, we should get a good idea of current market state.

To predict future market state we can use time-series forecasting methods which I will keep for a different post.

Auto-correlation

Auto-correlation is correlation (Pearson correlation coefficient) of a given sample (A) from a time series against other available samples (B). Both samples are of the same size. 

The correlation value lies between 1 and -1. A value of 1 means perfect correlation between two samples where they are directly proportional (when A increases, B also increases). A value of 0 implies no correlation, and a value of -1 implies the two samples are inversely proportional (when A increases, B decreases).
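For example, the sample [1, 2, 3] has a correlation of 1 with [2, 4, 6] (directly proportional) and a correlation of -1 with [6, 4, 2] (inversely proportional).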

The simplest way to explain this is with an example. Assume:

  1. monthly data is available from Jan. 1996 to Oct. 2018
  2. we choose a sample size of 12 (months)
  3. the sample to be compared is the last 12 months (Oct. 2018 – Nov. 2017)
  4. value to be correlated is the Total Price Paid (summed by month).

As the sample size is fixed (12 months) we start generating samples from the series:

Sample to be compared: [Oct. 2018 – Nov. 2017]

Sample 1: [Oct. 2018 – Nov. 2017], this should give a correlation value of 1 as both samples are identical.

Sample 2: [Sep. 2018 – Oct. 2017], the correlation value should start to decrease as we skip back one month.

Sample N: [Dec. 1996 – Jan. 1996], this is the earliest period we can correlate against.

Now we present two graphs for different sample sizes:

  1. correlation coefficient visualised going back in time, grouped by Year (scatter and box plot per year) – to show yearly spread
  2. correlation coefficient visualised going back in time – to show periods of high correlation

One thing to note in all the graphs is that the starting value (right-most) is always 1. That is where we compare the selected sample (the last 12 months) with the first sample, which is the same last 12 months.

In the ‘back in time’ graph we can see the seasonal fluctuations in the correlation. These are between 1 and -1. This tells us that total price paid has a seasonal aspect to it. This makes sense, as we see many more houses for sale in the summer months than in winter – most people prefer to move when the weather is nice!

Fig 2: Example of In and Out of Phase correlation.

So if we correlate a 12-month period (like this one) one year apart (e.g. Oct. 2018 – Nov. 2017 and Oct. 2017 – Nov. 2016), we should get positive correlation, as the variation of Total Price Paid should have the same shape. This is ‘in phase’ correlation. This can be seen in Figure 2 as the ‘first’ correlation, which is in phase (in fact it is perfectly in phase and the values are identical – thus the correlation value of 1).

Similarly, if the comparison is made ‘out of phase’ (e.g. Oct. 2018 – Nov. 2017 and Jul. 2018 – Aug. 2017), where the variations are opposite, then negative correlation will be seen. This is the ‘second’ correlation in Figure 2.

This is exactly what we can see in the figures below. Sample sizes are 6 months, 12 months, 18 months and 24 months. There are two figures for each sample size. The first figure is the spread of the auto-correlation coefficient for a given year. The second figure is the time-series plot of the auto-correlation coefficient, where we move back in time and correlate against the last N months. The correlation values fluctuate between 1 and -1 in a periodic manner.


Fig. 3a: Correlation coefficient visualised going back in time, grouped by Year (scatter and box plot per year), Sample size: 6 months

Fig. 3b: Correlation coefficient visualised going back in time; Sample size: 6 months


Fig. 4a: Correlation coefficient visualised going back in time, grouped by Year (scatter and box plot per year); Sample size: 12 months

Fig. 4b: Correlation coefficient visualised going back in time; Sample size: 12 months


Fig. 5a: Correlation coefficient visualised going back in time, grouped by Year (scatter and box plot per year); Sample size: 18 months

Fig. 5b: Correlation coefficient visualised going back in time; Sample size: 18 months


Fig. 6a: Correlation coefficient visualised going back in time, grouped by Year (scatter and box plot per year); Sample size: 24 months

Fig. 6b: Correlation coefficient visualised going back in time; Sample size: 24 months

Conclusions

Firstly, if we compare the scatter + box plot figures, especially for 12 months (Figure 4a), we find the correlation coefficients are spread around ‘0’ for most of the years. One period where this is not so, and the correlation spread is consistently above ‘0’, is the year 2008 – the year that marked the start of the financial crisis. The spread is also ‘tight’, which means all the months of that year saw consistent correlation, for the Total Price Paid, against the last 12 months from October 2018.

The second conclusion we can draw from the positive correlation between the last 12 months and the period of the financial crisis (Figure 4b) is that the variations in the Total Price Paid are similar to (weakly correlated with) those seen at the time of the financial crisis. This obviously does not guarantee that a new crisis is upon us, but it does mean that the market is slowing down. This is a reasonable conclusion given the double whammy of impending Brexit and the onset of the winter/holiday season (which traditionally marks a ‘slow’ time of the year for property transactions).

The code is once again in Python and attached below:

from matplotlib import pyplot as plt
from matplotlib.dates import YearLocator, MonthLocator, DateFormatter
from datetime import datetime as dt
import pandas as pd
import numpy as np

months = MonthLocator(range(1, 13), bymonthday=1, interval=3)
year_loc = YearLocator()

window_size = 12

def is_crisis(year):
    if year < 2008:
        return 0
    elif year > 2012:
        return 2

    return 1

def is_crisis_start(year):
    if year < 2008:
        return False
    elif year > 2008:
        return False

    return True

def process_timeline(do_plot=False):
    # Correlate the last `window_size` months (held in `current`) against every
    # earlier window of the same size, walking back in time one month per step.
    col = "Count"
    y = []
    x = []
    x_d = []
    box_d = []
    year_d = []
    year = 0
    years_pos = []
    crisis_corr = []
    for i in range(0, size - window_size):

        try:

            if year != df_dates["Year"][size - 1 - i]:

                if year > 0:
                    box_d.append(year_d)
                    years_pos.append(year)
                    year_d = []
                year = df_dates["Year"][size - 1 - i]

            corr = np.corrcoef(df_dates[col][size - i - window_size: size - i].values, current[col].values)
            year_d.append(corr[0, 1])
            y.append(corr[0, 1])
            if is_crisis_start(year):
                crisis_corr.append(corr[0, 1])
            x.append(year)
            month = df_dates["Month"][size - 1 - i]
            x_d.append(dt(year, month, 15))

        except Exception as e:
            print(e)

    box_d.append(year_d)
    years_pos.append(year)

    corr_np = np.array(crisis_corr)
    corr_mean = corr_np.mean()
    corr_std = corr_np.std()

    print("Crisis year correlation: mean and std.: {} / {} ".format(corr_mean, corr_std))
    if do_plot:
        # Scatter and box plot of the correlation coefficient grouped by year.
        fig, sp = plt.subplots()

        sp.scatter(x, y)
        sp.boxplot(box_d, positions=years_pos)

        plt.show()

        # Time-series plot of the correlation coefficient going back in time.
        fig, ax = plt.subplots()
        ax.plot(x_d, y, '-o')
        ax.grid(True)
        ax.xaxis.set_major_locator(year_loc)
        ax.xaxis.set_minor_locator(months)
        plt.show()

    return corr_mean, corr_std

csv = "c:\\ML Stats\\housing_oct_18_no_partial_mnth_cnt_sum.csv"
full_csv = "c:\\ML Stats\\housing_oct_18.csv_mnth_cnt_sum.csv"

df = pd.read_csv(full_csv)


mnth = {
1: "Jan",
2: "Feb",
3: "Mar",
4: "Apr",
5: "May",
6: "Jun",
7: "Jul",
8: "Aug",
9: "Sep",
10: "Oct",
11: "Nov",
12: "Dec"
}


dates = list(map(lambda r: dt(int(r[1]["Year"]), int(r[1]["Month"]), 15), df.iterrows()))

crisis = list(map(lambda r: is_crisis(int(r[1]["Year"])), df.iterrows()))

df_dates = pd.DataFrame({"Date": dates, "Count": df.Count, "Sum": df.Sum, "Year": df.Year, "Month": df.Month, "Crisis": crisis})

df_dates = df_dates.sort_values(["Date"])

df_dates = df_dates.set_index("Date")

plt.plot(df_dates["Sum"],'-o')
plt.ylim(ymin=0)
plt.show()

size = len(df_dates["Count"])

corr_mean_arr = []
corr_std_arr = []
corr_rat = []
idx = []
for i in range(0, size-window_size):
    end = size - i
    current = df_dates[end-window_size:end]
    print("Length of current: {}, window size: {}".format(len(current), window_size))

    ret = process_timeline(do_plot=True)
    break #Exit early




UK House Sales Analysis

I have been looking at house sales data from the UK (actually England and Wales). This is derived from the Land Registry data set (approx. 4 GB) which contains all house sales data from the mid 1990s. The data contains full address information, so one can use reverse geo-coding to get the location of the sales.

Sales Density Over the Years

If we compare the number of sales over the years an interesting picture emerges. Below is the geographical distribution of active regions (w.r.t. number of sales).

In the years 2004-2007 there is strong activity in the housing market – this is especially true for London (the big patch of green), the South coast and the South West of England.

The activity penetrates deeper (look at Wales and the South West) as saturation starts to kick in. Then the financial crisis hits and we can immediately see a weakening of sales across England and Wales as it becomes more difficult to get a mortgage. The market shows its first signs of recovery around London, and the recovery then starts to gain momentum outside London as well.

The recovery becomes fairly widespread thanks to various initiatives by the Government, rock-bottom interest rates and a generally positive feeling about the future. Then Brexit and other factors kick in – the main issue is around ‘buy-to-let’ properties, which are made less lucrative by a three-pronged attack: an increase in stamp duty on a second house, the removal of tax breaks for landlords and a tightening of lending for a second home (especially interest-only mortgages).

Finally, in 2017 one can see that the market is again cooling down. The latest data suggests house prices have started falling once again, and with the recent rise in interest rates it will be all the more difficult to land a good deal on a mortgage.

Average House Prices

The graph above shows how the average sale price has changed over the years. We can see there is a slump in prices starting from 2017. It will be interesting to see how house prices behave as we start 2018. It will be a challenge for people to afford higher mortgages as inflation outstrips income growth. This is especially true for first-time buyers. Given the recent bonanza of zero per cent stamp duty for first-time buyers, I am not sure how much of a (positive) impact this will have.

Returns on Properties

The graph above shows how the returns and risks associated with a house change with the number of years it is held. It is clear that it is easier to get a return when a house is held for at least 5 years. Below that there is a risk of losing money on the property. Properties resold within two years are most likely to make a loss. This also ties in with a ‘distress’ sale scenario, where the house is sold without waiting for the best possible offer, or with times of slowdown where easy-term mortgages are not available.

Number of Times Re-sold

The graph above shows the number of times a house is re-sold (vertical) against the number of years it is held before being re-sold. Most houses are re-sold within 5 years. But why is there a massive spike for houses re-sold within 2 years? One possible explanation is that these are houses bought by a developer, improved and then re-sold within a year or so.

House Transactions by Month of Year

Transaction by month

What is the best time of the year to sell your house? Counting the number of transactions by month (figure above), we can see the number of transactions increases as spring starts and continues to grow until the end of summer. In fact, 60% more houses are sold in the Jun – Aug period than in the Jan – Mar period.

Transactions tend to decrease slightly as autumn starts and fall off towards the end of the year. This is expected, as people would not want to move right after Christmas or early in the new year (winter moves are difficult!).

Infrastructure

I have used Apache Spark (with Java) to summarise the data from approximately 4 GB down to 1-1.5 GB CSV files, and then Python to do the next round of aggregations and to generate the plots.

 

Next step will be to incorporate some Machine Learning into the process.

iProcess Connecting to Action Processor through Proxy and JMS – Part 1

The iProcess Workspace Browser is a web-based front-end for iProcess. The Workspace Browser is nothing but a web-application which is available in both asp and servlet versions. It does not connect directly to the iProcess engine though. It sends all requests to the iProcess Action Processor which is also a web-application (again available in a servlet and asp version). The Action Processor forwards the request (via TCP/IP) to the iProcess Objects Server which works with the iProcess Engine and processes the request. This arrangement is shown below (with both the Workspace Browser and Action Processor deployed in the same web-server).

Now this setup is fine in an ideal scenario but in most organizations web-servers are isolated (‘fenced’) from the core infrastructure (such as databases and enterprise servers). Usually the access to the core infrastructure is through a single channel (e.g. through a messaging queue server) with direct TCP/IP connections and port 80 requests from outside blocked. In that case you will need to deploy the Action Processor inside the ‘fence’ with the core infrastructure and setup a proxy system to communicate with the Workspace Browser (which will be sitting outside the ‘fence’). The proxy system will transport the HTTP request over the allowed channel (JMS in this example) and return the response. An example is shown below using JMS.

 To implement the above we need to create the following components:

1) Local Proxy – which will be the target for the Workspace Browser instead of the Action Processor which is sitting inside the ‘fence’ and therefore not accessible over HTTP.

2) Proxy for JMS – Proxy which puts the http request in a JMS message and gets the response back to the Local Proxy which returns it to the Workspace Browser.

3) JMS Queues – Queues to act like channels through the ‘fence’.

4) Service for JMS – Service to handle the requests sent over JMS inside the ‘fence’ and to send the response back over JMS.

You might ask why we need a Local Proxy and why we don’t call the BW Proxy directly. The reason is very simple: the BW Proxy and Service should be as uncluttered as possible – ideally their only task is to carry the request through the ‘fence’ and bring out the response. Any processing of the request and response should be done somewhere else (and as we shall see in the example, there is a lot of processing required).

As we don’t want to fiddle with the internals of the iProcess Workspace Browser, we simply add a Local Proxy which does the processing of the request and response. Then we set the Workspace Browser to send all Action Processor requests to the Local Proxy. This means that the Local Proxy will ‘behave’ exactly like the Action Processor as far as the Workspace Browser is concerned.

To put it in one line: using a Local Proxy allows us to separate the ‘behavior’ of the Action Processor from the task of sending the message through the ‘fence’.

 

In the example to follow, we have:

1) JSP based Local Proxy (easy to code – no compiling required!).

2) BusinessWorks based Proxy for JMS

3) TIBCO EMS Server based queues.

4) BusinessWorks based Service for JMS

 

In the next part, the example!

 

Deploying to a Business Work Service Container

There are three ‘locations’ or ‘containers’ that a Business Work EAR can be deployed to. These are:

1) Business Work Standalone Service Engine

2) Business Work Service Engine Implementation Type (BWSE-IT) within an ActiveMatrix Node

3) Business Work Service Container (BW-SC)

The first two scenarios do not require any special effort during deployment and usually can be done through the admin interfaces (bw-admin for standalone and amx-admin for BWSE-IT). But if one wishes to deploy an EAR to a Service Container then we need to set up the container and make a change in the Process Archive. This tutorial is for a Windows-based system.

Before we get into all that, let us figure out what a BW Service Container (BW-SC) is and why one would want to use it.

A BW-SC is a virtual machine which can host multiple processes and services within individual process engines. Each EAR deployed to a BW-SC gets its own process engine. The number of such process engines that can be hosted by a container depends on the running processes and the deployment configurations. To give an analogy, the load that an electric supply (the service container) can take depends not just on the number of devices (i.e. process engines) on it but also on how much electricity each device requires (the processes running within each engine).

Keeping in mind the above, when using BW-SC, it becomes even more important to have proper grouping of processes and services within an EAR.

The standard scenario where you would use a BW-SC is for fault-tolerance and load-balancing – in other words, to deploy the same service (fault-tolerance) and the required backend processes (load-balancing) on multiple containers. Service Containers can also be used to group related services together, creating a fire-break against a failure cascade.

The first step to deploying to a BW-SC is to enable the hosting of process engines in a container. The change has to be made in the bwengine.xml file found in the bw/<version>/bin directory. Locate the following entry (or add it if you cannot find it):

<property>
    <name>BW Service Container</name>
    <option>bw.container.service</option>
    <default></default>
    <description>Enables BW engine to be hosted within a container</description>
</property>

The second step  is to start a service container to which we can deploy our EARs. Go to the command line and drill down to the  bw/<version>/bin directory. There run the following command:

bwcontainer --deploy <Container Name>

Here the <Container Name> value, supplied by you, will uniquely identify the container when deploying EARs. Make sure  that the container name is recorded properly. In the image below you can see an example of starting a container called Tibco_C1.

Starting Container

 

The third step is to deploy our application to the container (Tibco_C1). Log in to the BusinessWork Administrator and upload the application EAR. In the image below the test application EAR has been uploaded and awaits deployment.

EAR Uploaded

The fourth step is to point the process archive towards the container we want to deploy to. Click on the Process Archive.par and select the ‘Advanced’ tab. Go down the variable list and locate the bw.container.service variable, which should be blank if you are not already deploying to a container.

Property Set

Type the container name EXACTLY as it was defined during startup. TIBCO will NOT validate the container name so if you set the wrong name you will NOT get a warning, you will just be left scratching your head as to why it didn’t work. In our example we enter ‘Tibco_C1’ in the box (see below).

  Property Defined

 Save the variable value and click on Deploy. Once the application has been deployed, start the service instance. That is it.

To verify that your application is running on the container, once the service instances enter the ‘Running’ state, go back to the command line and the bin directory containing bwcontainer.exe. There execute the following:

bwcontainer --list

This command will list the process engines running in any active containers on the local machine. The output from our example can be seen below.

Command Line Listing

We can see the process archive we just deployed, running in the Tibco_C1 container.

If you have any other containers they will also show up in the output.

Remember one important point: If a service container goes down, all the deployed applications also go down. These applications have to be re-started manually through the Administrator, after the container has been re-started.

 

Dealing with Job Consultants in UK (Part 3: Remaining Stages)

Part 1: Here
Part 2: Here
Part 3: Here

IMPORTANT ADVICE IS IN ITALICS

There are several stages (as mentioned before):

1) The Initial Call

2) Pre-interview Phase

3) Post-interview Phase

4) Finalization Phase

5) Bye-bye!

 The Pre-interview Phase

If the client likes your CV the consultant will contact you to arrange an interview date. Most companies reimburse rail ticket costs especially if the interview is in London. Make sure, before you select a date and time, you confirm whether your travel expenses will be reimbursed. Also find out the duration of the interview so that you can plan your return journey. This is especially important if you are working.

The consultant will also send you a detailed interview description and call you up to find out if you are all set for the interview. Make sure you know the location of the company and other logistic details.

Make sure you cover all the points and if you are at all confused by anything related to the interview (from the nature of the interview to what kinds of clothes you should wear), don’t feel shy and ask the consultant!

The consultant might want to meet with you before the interview. Try and meet as this may allow the consultant to give you some important tips before the interview.

The Post-interview Phase

The consultant will surely call you after the interview is over to discuss with you how it went and what you thought of the company. Make sure you are honest. Sometimes a consultant might be able to get you through to the next round if they have some influence with the company.

This is the second most important phase. Here most consultants change sides. Now they will push you hard to accept the job, provided obviously that you clear all the interviews (especially if there are multiple rounds of interviews). Make sure that you are comfortable with the job and the company.

Most probably you would have visited the offices of the prospective employer and seen the work environment during the interview. By now you should have a good idea of whether this is the way ahead for your career or not.

Be very clear and honest with the consultant. Don’t come under pressure and remember your consultant will show you the positive side of taking up the job. You have to find out the negatives and balance it out with the positives to ensure it is the right move for you!

If you are confused TAKE SOME TIME TO THINK ABOUT THE JOB OFFER!

Every consultant will give you time to think but just remember that most consultants will expect a definitive yes or no afterwards.

 

Finalization Phase and Bye-bye!

If you accept the offer the consultant will keep you updated as to what is required next. There might be some paperwork or a meeting with the employer and consultant. This is usually the shortest phase and the consultant’s work is almost done.

Most consultants remain in touch at least till you start at the new place. They might get in touch later if there are any issues or just to take feedback from you about the job and to find out if all is going well.

SQL Server Express 2008: Timeout and Pre-login Errors

If you are getting Timeout or Pre-login Errors when trying to access MS SQL Server Express 2008 from a Windows Vista based client then read on!

Some of the reasons behind these errors (and the solutions) are given here:

http://technet.microsoft.com/en-us/library/ms190181.aspx

If the above does NOT resolve the error (as in my case) then before you start cursing Microsoft try this:

Go to the Network and Internet -> Network Connections

Right click on the connection being used (WiFi or LAN or any other).

I found the IPv6 and Link-Layer Topology protocols enabled, which I disabled.

This partially resolved the issues. Database access was quite stable and I was able to do some of the work. But still things were not 100% normal. I was losing the wireless connection and it was showing me connected to an ‘unidentified network’. Even the internet stopped working if the computer went into standby.

This it seems is a very common problem and more info can be found on the Microsoft site.

This problem results from:

A) Issues with DHCP implementation difference between Vista and XP.

Read for solution: http://support.microsoft.com/kb/928233

B) Issues with power management profiles and WiFi networks

Read for solution: http://support.microsoft.com/kb/928152

The Door That Wouldn’t Open….

At work we have code-lock doors (doors with numeric keypad). So whenever you want to move from one area of the office to the other you have to pass through one or two code-locked doors.

The weird thing was that sometimes the door would open after entering the code once but most times I would get stuck and need the code to be re-entered.
One day, I was feeling a bit frustrated and the doors were getting stuck again and again.
As I was re-entering the code, for the nth time that day, I thought ‘man what a day I am having, even the doors here are blocking my way today’.
A few days later I entered the code and the door got stuck. This time instead of re-entering the code I just pushed the door firmly. To my amazement the door opened without requiring the code to be re-entered.
It made me realise that similar things happen in life with all of us. The same problems keep bugging us. We keep getting stuck. But we should never lose hope. Why? Because just like that door, if we just make ourselves strong and push through these little blocks on the road of life then all doors will open!
🙂