Composite Liveability Measure — compare and contrast the prosperity, liveability and sustainability of places

The Composite Liveability Measure is designed to evaluate objectively the prosperity, liveability and sustainability of a particular place. It is sophisticated enough to illustrate just why one place scores higher or lower than another, using ten domains — five thematic domains (housing, economy, security, education and health) and five cross-cutting process domains (green, inequality, subjective wellbeing, services and civic engagement/governance).
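The post does not publish the scoring code, but the comments below mention quintile points and a 50-point scale. As a hypothetical sketch (not the Council's actual implementation), suppose each of the ten domains contributes quintile points from 1 to 5, summed into a composite out of 50:

```python
# Hypothetical sketch of a ten-domain composite. Domain names come from the
# post; the scoring rule (1-5 quintile points per domain, summed to 50) is an
# assumption based on the comment thread, not the published methodology.
DOMAINS = [
    "housing", "economy", "security", "education", "health",
    "green", "inequality", "subjective wellbeing", "services",
    "civic engagement/governance",
]

def composite_score(domain_points):
    """Sum per-domain quintile points (1 = bottom quintile, 5 = top)."""
    missing = set(DOMAINS) - set(domain_points)
    if missing:
        raise ValueError(f"missing domains: {sorted(missing)}")
    return sum(domain_points[d] for d in DOMAINS)

middling_place = {d: 3 for d in DOMAINS}
print(composite_score(middling_place))  # 30 out of a possible 50
```

Because each domain is capped at five points, no single extreme indicator can dominate the composite.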

We are keen to validate our methodology, and would like to invite researchers and statisticians to examine it. Find out more at www.coventry.gov.uk/cclm/



3 Comments

Former Member 7 Years Ago
Interesting. We have been doing something along the same lines, but in our case we were looking at wards within our area rather than boroughs. I have two comments.

First, I note that you are using a z-score technique, as we do. However, many indicators have very non-standard distributions, in particular distributions with very long 'tails': while most areas have results around the mean, some have values much higher. These produce very high z-scores and tend to bias the results, as the high z-scores effectively get a higher weight. In our analysis, we examine the distribution of each variable and, if it is too non-normal, apply some sort of transformation (e.g. taking the log of the value) to contain the extreme values. This ensures that extreme values in one indicator don't overpower the rest. Have you thought about something along those lines?

Secondly, have you done any analysis to tell whether you need all those variables, e.g. cluster analysis? In many cases, while you may think that all the variables tell you something, in reality they are highly correlated and you would get more or less the same results from only a few. We were often asked to add extra variables, but found that in many cases they didn't actually add anything to the analysis because they were so highly correlated with what was there already.

Regards,
Tim Bounds, Tees Valley Unlimited
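Tim's point about long tails can be illustrated with a small sketch (invented data, Python standard library only): z-scoring a skewed indicator directly gives the long-tail area a large z-score, while taking logs first contains the tail before standardising.

```python
import math
import statistics

def zscores(values):
    """Distance of each value from the mean, in units of standard deviation."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

# An invented skewed indicator: most areas cluster low, one sits far out
# in the tail, as Tim describes.
raw = [1, 2, 2, 3, 3, 3, 4, 4, 5, 50]
logged = [math.log(v) for v in raw]  # the transformation Tim suggests

print(max(zscores(raw)))     # the tail area's z-score on the raw values
print(max(zscores(logged)))  # smaller once the log has contained the tail
```

The log transform shrinks the gap between the tail area and the cluster, so the extreme value carries less weight when the z-scores are summed into a composite.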
Si Chun Lam 7 Years Ago
Hi Tim,

Thank you for taking the time to look through the Coventry Composite Liveability Measure -- we appreciate it, and we will check out your work in more detail. I'm assuming it is the Community Vitality Index at https://www.teesvalleyunlimited.gov.uk/media/127635/community_vitality_index_-_october_2012.pdf / https://www.teesvalleyunlimited.gov.uk/instantatlas/CVI/atlas.html -- it looks very interesting and we'll take note.

In terms of accounting for distributional effects, so far we have experimented with rank-based and equal-interval quintiles (eventually settling on rank-based quintiles), which does help account for extreme values. As a result, the top-ranked and bottom-ranked areas are only five apart, to show that a top-ranked area is not 152 times (or 326 times) better than a bottom-ranked one.

We agree that fewer variables are better, and have previously used factor analysis to reduce the number of variables. Our starting point was actually the Index of Multiple Deprivation (IMD), on the basis that a liveability measure should simply be an inverse of the IMD. Eventually, with a brief to look at some more positive measures, we came up with an index of ten variables -- one for each of the ten domains. However, there were some interesting -- but unpalatable -- correlations, for instance between the rate of NEETs and the rate of crime, which eventually resulted in a redesign to a composite of three measures per domain, developed in consultation with local services. We think this is a fair balance between the most statistically robust model and one that can be intuitively understood (though ten variables is still better!).

Regards,
Si Chun Lam
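The rank-based quintile approach Si describes can be sketched in a few lines (invented data; the Council's actual code is not published). Each area gets 1 to 5 points by rank, so even wildly skewed raw values end up at most a quintile band apart:

```python
# Minimal sketch of rank-based quintiles: assign 1 (bottom fifth of the
# ranking) to 5 (top fifth) by rank position, ignoring the raw magnitudes.
# Ties are broken arbitrarily here; real indicators may need tie handling.
def rank_quintiles(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    points = [0] * n
    for rank, i in enumerate(order):
        points[i] = rank * 5 // n + 1
    return points

# Invented skewed indicator; the top raw value is 326 times the bottom one,
# echoing the multiples Si mentions, yet the quintile points run only 1 to 5.
raw = [1, 3, 4, 7, 9, 12, 40, 55, 152, 326]
print(rank_quintiles(raw))  # [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```

This is why ranking "helps account for extreme values": the transformation discards magnitude entirely and keeps only order.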
Tim Healey 7 Years Ago
Hi Tim and Si,

I've been looking at the log issue, and I'm not sure that logging and then z-scoring is overdoing the normalisation/standardisation -- it's a case of either log-normalising or z-scoring. The z-score has the happy effect of both standardising and normalising the data: it brings all the indicators to the same base and, because it uses the distance of the actual score from the mean as a function of the standard deviation of the distribution, it accounts for the spread of the distribution. The z-score also has the added benefits of being easily summed and readily recognised. The ranking points are finally allocated to quintiles, so no actual points on the 50-point scale are ever any further than 5 apart.

It's been really helpful to think these things through. We actually worked out how we wanted to do the analysis from first principles, following a workshop discussion at LARIA 2012 on distributional effects in this kind of analysis. We sought to convert raw scores into numbers which reflected the distribution, so we "invented" a thing called a Sigma Unit -- the relative position of the actual score to the mean, expressed in numbers of standard deviations -- which, someone pointed out to me the other day, is just what a z-score is. I did the same thing with chi-square the other day!
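Tim's "Sigma Unit" is indeed the textbook z-score; a minimal demonstration on invented data makes the equivalence concrete:

```python
import statistics

def sigma_units(x, values):
    """Tim's 'Sigma Unit': the position of a score relative to the mean,
    expressed in standard deviations -- exactly the definition of a z-score."""
    return (x - statistics.mean(values)) / statistics.pstdev(values)

# Invented indicator values: mean 5, population standard deviation 2.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sigma_units(9.0, data))  # 2.0 -- two standard deviations above the mean
```

Because every indicator is expressed on this common scale, the scores can be summed across indicators -- the "easily summed" benefit Tim mentions -- before being banded into quintiles.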