Tips 'n' Tricks

Microsoft Business Architecture


data matching
3/17/2014 4:37 PM
Decimal, Integer - % or +/- difference [always cover rounding]
date - range of days
string - ???
DQS does similarity comparisons based on multiple influences: known values, synonyms you have added into the knowledge base on the Domain Values tab, and the similarity of the spelling of the words.
Rather than write much more: I am not sure if you have seen Matching in action, but these videos illustrate the feature:

I want date matching to consider a wrong year, a reversed day and month, or an omitted digit
I want string matching to consider a skipped or added digit, or a single incorrect digit
Let's also have phonetic (sounds-like) matching and synonyms

Source: the SharePoint 2010 list of phonetics and synonyms for each word is held in a text file, about 25MB in size, in the following location:
C:\Program Files\Microsoft Office Servers\14.0\Bin\languageresources.txt

Watch for regional English settings
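The string rules above (a skipped digit, an added digit, or a single incorrect digit) are essentially edit distance. DQS's actual matching algorithm isn't published, so treat this as a rough sketch of the idea in Python; normalising by length and the sample strings are my own assumptions, not DQS behaviour:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance: the number of single-character
    insertions, deletions, or substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalise to 0..1 so one wrong digit in a long string still scores high."""
    if not a and not b:
        return 1.0
    return 1 - edit_distance(a, b) / max(len(a), len(b))

# One incorrect digit in a 9-digit string is 1 edit, so it still scores highly:
print(similarity("123456789", "123456780"))
```

A date rule like "day and month reversed" would need its own comparer (parse both orders and compare components) rather than plain edit distance.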
Khara notes
12/2/2013 7:08 PM
L10 - Dungeon Master [II, R2C]
- Red Rackuf Name Tag
- Broken Trident
- Bell Collar
- Ged. Necklace
- Large Insect Jawbone
Held: Burnt
Done: Gold, Steel, Cry, Wood
L45 Ripped Cape
Held: Burn, Woodt
L?? [II, R1T3]
- Ripped Frej Rove
- Bell Collar Ornament
- Ungoliant Jawbone
- Leviathan's Heart
Held: Darken and Wood
Done: Gold, Steel, Chry
L41 [R3,T4]
- Ripped Captains Hat
- 6 Pirate Tokens
Held: Wood
Done: Gold, Steel, Cry
3 Shabby Bags
Held: Darken has 2
Deviruchi PB I-IV
Held: Gold
Wiz Elix with Pale
Str Elix with Gold
L?? 30 Cards [I, R2,B4]
Held: Pale
Done: Steel
L?? 3 cards [II, L1B3]
- Leviathan
- Moonlight Flower
- Gedenhard
- Ungoliant
Held: -
Done: -
L46 - Masks
III L4, T4
III R2, B3
Held: Gold
4 sets of goblin masks Held: Steel
Beastmaster
12/2/2013 7:07 PM
4   3   5
1   5   1
5   1   1
5   5   1
5   5   5
Ranger
12/2/2013 7:05 PM
1   3   1
1   5   1
1   1   1
5   5   1
1   1   1
5   1   5
Either the 2 bold ones, or the 2 italics
Wizard
12/2/2013 7:02 PM
1   5   1
1       1
1   5   1
1   5   3
1   5   0
1   0   3
Rogue
12/2/2013 6:59 PM
1   5   1
0       0
0   5   5
3   3   5
0   5   0
0   5   5
Cresentia
12/2/2013 6:54 PM
5   2   1
1   1   3
1   5   0
1   5   5
1   5   0
1   0   0
5   3
3   3
Tracking mats
12/2/2013 6:52 PM
- Master Red requires Radiant Firm Cry
- Master Str and Expert Int require Dazzling Trace Cry
- can convert powder to cry to piece (both oridecon and elenium)
- Punches require Radiant Cry
- Vit Rune: Dazzling Trace Cry and Radiant Cry
- Int Rune: Et Life Cry and Radiant Cry
- Str Rune: Radiant Firm Cry and Radiant Cry
Accelerated Levelling
12/2/2013 6:49 PM
- Have crafters provide gear
- Ignore Kharas
- Use Battle Order scrolls - the timer only runs while the character is logged in
- At level 10, get the Dungeon Master achievement, and equip the title
- At level 41, Pirate Hat and Tokens, or Aye Captain
- At level 45, Avenger (ripped cape)
- Get to 98 pets to get the improved timeout
SQL for SharePoint
1/30/2013 8:31 AM

In case that link fails:

Key Takeaways from my session


During my session, I made a reference to several resources:

Awesome links - UX
12/4/2012 10:58 AM
Some great quotes
12/4/2012 10:47 AM

When the train goes through a tunnel and it gets dark, you don't throw away the ticket and jump off. You sit still and trust the engineer.
Corrie Ten Boom

To state the facts frankly is not to despair the future nor indict the past. The prudent heir takes careful inventory of his legacies and gives a faithful accounting to those whom he owes an obligation of trust.
John F. Kennedy

A wedding anniversary is the celebration of love, trust, partnership, tolerance and tenacity. The order varies for any given year.
Paul Sweeney

He that takes truth for his guide, and duty for his end, may safely trust to God's providence to lead him aright.
Blaise Pascal

Trust your own instinct. Your mistakes might as well be your own, instead of someone else's.
Billy Wilder


XL Dynamic list of values
12/4/2012 10:42 AM
The arguments used in this Offset function are:
  1. Reference cell: Sheet1!$A$1
  2. Rows to offset: 0 [1 if there is a header]
  3. Columns to offset: 0
  4. Number of Rows: COUNTA(Sheet1!$A:$A) [-1 if there is a header]
  5. Number of Columns: 1
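Putting those five arguments together exactly as listed, the formula for the dynamic range (no header row) is:

```
=OFFSET(Sheet1!$A$1, 0, 0, COUNTA(Sheet1!$A:$A), 1)
```

With a header row, apply the bracketed adjustments: `=OFFSET(Sheet1!$A$1, 1, 0, COUNTA(Sheet1!$A:$A)-1, 1)`. Define either version as a named range and point a data-validation list at the name; the list then grows and shrinks with column A.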
Architecture
12/4/2012 10:34 AM

Architecture: A set of principles, rules, standards and guidelines expressing a vision and implementing concepts, and containing a mixture of style, engineering and construction principles.

Architecture is the limitation of the possible solutions to those that meet predefined criteria of cost, usability, style, material, quality and any other agreed criterion.

Reuse of experience is key - we can't afford to merely learn by doing

- subdivide problems into domains
- identify (and track) objectives
- use principles
- use visualizations to provide complex material in ways that support decision making

risk mitigation
business driven
alignment with corporate vision
reduced complexity
communication enabler - sharing the vision

Steps in Contextual Design
12/4/2012 10:33 AM
Characterize your user population, collect and consolidate sequences for a task analysis, vision a solution, work out the details by storyboarding, mock it up on paper, and test it in six to ten weeks.

Contextual Interviews with Interpretation
 Sequence Model with Consolidation
 Affinity Diagrams
 Wall Walk and Visioning
 Paper Mock-up Interviews with Interpretation
Working notes on educating BAs
11/28/2012 8:22 AM

A bit more positioning, so they get WHY they should care (and how much).


 How about an industry view: - we should EXCEED the knowledge level needed for basic at

They have 3 levels: the higher two are outlined in

The basic level can be seen on (it's easy)

Use cases need to be supplemented with storyboards



Early requirements work should position the overall type of HAS we are concerned with, and narrow our focus.

Event-driven? Single user focus? Mass production?

 Is the logic exception based, or rule centric? 


For a “rigid” element (which might be fully automated using a rules engine), I want states and the actions which move between them. We should be able to draw it all easily. This is traditional drawing. [buy stuff on amazon]

 For a “semi-flexible” element (like adaptive case management), I want a central construct and the options I can take – that gives me states and actions, but they won’t connect up, since the human can combine them in different patterns. [even using word (or windows) is a bit like this: so many different options user can combine]

For an even more flexible environment (such as SOA), I want service contracts


Orchestration v Workflow; Message Broker; Spoke and Hub v  

Why would I put things close to the user? (Design)

Waterfall requirements vs incremental vs iterative vs “agile”



The annual licensing fee is $999, and the fee for Corporate Members is $750

We could then use the model to review our guys - and the client.

Social and BPM?
7/27/2011 1:21 PM

1. Social by Design: Collaboration around process improvement [chosen member, easy contribute, feedback mechanism?]


2. Design by Doing; Collaboration around ‘getting a job done’ [value of subscription and possible transparency rather than narrowly focussed email]

3. Social Network: Social networking within the organization [reducing email]

Using the cloud
7/27/2011 10:29 AM
XMarks (per browser)
Sync bookmarks across browsers and machines
LastPass (I pay, but free is good)
Saves usernames and passwords
PlaxoCalendar (pay for it)
Sync appointments and contacts (run manually) across iPhone, multiple Outlooks, etc (I login via Facebook)
Dropbox (2GB free)
Your folder in the cloud
Adobe SendNow (free, but 100MB limit per file, and each can only be up for 7 days)
Send large files
[competes with GoogleDocs, YouSendIt, TransferBigFiles]
Unable to connect to the network/internet?
7/27/2011 10:20 AM

(A stack change introduced in Vista/2003 establishes a default route which gets higher priority than one set by DHCP, and prevents the machine from connecting. It can recur after installing a variety of software, and some apps (temporarily) fix it.)


Click Start->All Programs->Accessories, right-click on Command Prompt and choose "Run as administrator". Then enter the following command.

      route print

Please check if is listed as gateway in the table. If so, please check the "Network destination" and "Netmask" of that line, and enter the following command.

      route delete -p [Network destination] mask [Netmask]

Add templates to the 2010 Ribbon
7/27/2011 9:55 AM
If you have several templates, add a custom group to the Ribbon for easy access. The first thing you need is a macro; use the following sub procedure as a guide:

Sub OpenCustomTemplate()
    'Open a template from a macro button added to the Ribbon.
    Dim myFolder As Outlook.MAPIFolder
    Dim myItem As Outlook.MailItem
    Set myFolder = Session.GetDefaultFolder(olFolderInbox)
    Set myItem = myFolder.Items.Add("IPM.Form.NewsletterTemplate")
    'Show the newly created item to the user.
    myItem.Display
End Sub

Update the Set myItem statement to reflect the name of your template but keep the IPM.Form component (that’s the class). With the macro in place, add a custom group to the Ribbon as follows:

  1. Click the File tab and then click Options under Help in the left pane.
  2. Select Customize Ribbon in the left pane.
  3. Click New Group, choose Rename, enter a name such as Templates, and click OK. By default, Outlook will position the new group at the end of the Ribbon. You can use the arrows to the right to move it.
  4. With the new Templates group still selected, choose All Commands from the Choose Commands drop-down and select Macros.
  5. Select your macro and click Add.
  6. Right-click the macro (in the Templates group now), choose Rename, and enter a short, but descriptive name.
  7. Click OK.

Outlook will place a Templates group on the Home tab, as shown in Figure A.

Figure A

Add a new group for templates to the Ribbon.
Mapped path?
6/8/2011 4:10 PM
Even works in a locked down XP environment (using a user variable).
Eight golden rules of interface design
3/22/2011 10:32 PM

1. Strive for consistency

2. Cater to universal usability

• help new users through basic procedures

• enable frequent users to use shortcuts

3. Offer informative feedback

• All actions should result in system feedback

4. Design dialogues to yield closure

5. Offer error prevention and simple error handling

6. Permit easy reversal of actions

7. Support internal locus of control

• make users the initiators of actions - users should feel they are in control

8. Reduce short term memory load

• 7 items +/- 2


USB Boot Disk
3/4/2011 12:44 PM
Notes on Themes
11/18/2010 10:42 AM




And then

Note the CSS reference list - Heather is a god -




Serve's js tool for building one -

[simplifies the colour configurations etc to something useable]



[One note - if saving the theme out of the theme builder causes an exception, see the note at the bottom of this link for a workaround -]




2007 - To apply a theme to all subsites, follow steps in


2010 - Detailed information on how theming works in 2010 - and

Set up my email
11/2/2010 11:50 AM

email address

tick tick basic

Hallmarks of a True Facebook application
8/31/2010 11:41 AM


  • Notify friends through mini feed on installation and selective activity as it happens through your application
  • Canvas page to merchandise your widget to Facebook users
  • Application integrates into user’s profile pages - doesn’t just sit on top
  • User comments get written to the Wall for everyone to see
  • Your application takes on familiar Facebook look and feel
And hence the social applications of the future???
Web 2.0 Philosophy
8/31/2010 11:23 AM
• Simplicity over Completeness
• Share over Protect
• Advertise over Subscribe
• Early Availability over Correctness
• Select by Crowd over Editor
• Honest voice over Corporate Speak
• Participation over Publishing
• Community over Product
Sourced from a recent IT Leadership conference on August 24th, 2006 via
Requirements Tools
8/24/2010 11:57 AM
  • R
Annoying authentication prompts
8/19/2010 1:03 PM
Situation: in each browser session you have to log in again at a SharePoint site. You also have to log in when accessing documents.
[IE8 on Win7]
Explanation: [My situation] If the site you are using isn't in the same domain as your machine login, Microsoft's heightened default security may prevent your credentials from being passed for automatic authentication.
Solution Description: There is a registry key where you can list URLs for the specific purpose of forwarding the credentials. The list should be as restrictive as possible to avoid security issues. Also, because there is no specific deny list, the credentials are forwarded to all the servers that match this list.
In my situation, I am going to grant access to, and - neatly avoiding any wildcards.
[for details on constructing the URL list, see the slightly unrelated - this is not just a Vista thing. That thread also identifies the registry key to modify.]
This fix requires a reboot to take effect.
1. Windows may have stored the wrong credentials - fix that in Control Panel\All Control Panel Items\Credential Manager
2. Mark the site as trusted
3. Check you're not set to require prompting in Internet Options.
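The registry key itself isn't named above (the thread reference didn't survive the copy). On Vista/Win7 the credential-forwarding list usually meant in this context is the WebClient AuthForwardServerList value - an assumption on my part, so verify against the referenced thread before applying; the server URL is a placeholder:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" ^
    /v AuthForwardServerList /t REG_MULTI_SZ /d "https://portal.example.com" /f
```

As noted above, a reboot (or at least restarting the WebClient service) is needed for the change to take effect.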
SharePoint: Repair fails
2/18/2010 5:29 PM
Sometimes you just gotta try a repair, and you get an error telling you to use a valid copy of "osrvmui.msi".
Turns out the INSTALL PATH was saved in the registry at the original install, and it wants the media to be there!
HACK: MSI files are self-registering, and typically bootstrap the other required files.  Go through the installation media, running all the MSI files.  Now run the repair...  Bingo.
Annoyed by WSP files?
1/29/2010 11:02 AM
Make a registry entry so they get opened as if they were CABs, without endlessly changing the extension to and fro...
Windows Registry Editor Version 5.00
[Using on Windows2008ServerR2]
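The body of the .reg file didn't survive the copy; a minimal sketch of the idea is below. The CABFolder class name is an assumption - copy the (Default) value from HKEY_CLASSES_ROOT\.cab on your own machine to be sure:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.wsp]
@="CABFolder"
```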
MOSS2007 on Win2008R2
1/22/2010 9:44 AM
R2 requires certain SPs, which as of writing were not available all-in-one. It (mainly - the exception is SQL - see below) won't let you install and then update.
Download the SPs, and use the /extract:<dir> syntax to unpack them. Put them into the 'updates' folder in the MOSS source - they are now "slipstreamed".
You don't need to install WSS separately anymore - the MOSS install includes it - but you do need the WSS SPs, and you need to MANUALLY remove wsssetup.dll (it conflicts with the MOSS version, which is a superset)
HOWEVER, MOSS will only install SQL2005 Express x86:
  1. If you want 2008, non-express, or 64 bit, you need to install that first manually.
  2. If you let MOSS do the default thing, it will instruct you to install SQL2005SP3 after the install (but before the config wizard) (and remember, that's not the normal SP3, it's the special one for SQL Express)
Running up yet another Windows 2008 Server
1/22/2010 9:07 AM
A bit of doco
  1. I like Daniel Petri's SendTo Toys - great for getting a full path to paste into doco -
  2. Make sure to add Notepad to the list!
  3. Manually reconfigure the dos prompt defaults to make it easier to read and turn on copy/paste
  4. Consider some of the performance changes for a workstation:
  5. Create a shortcut to the 12 hive, and put stsadm on the path
  6. Create my standard quicklinks
  7. Add search providers for:
Val-IT Initiatives
1/14/2010 12:43 PM
Value Management
- Define the Value Proposition for Value Management
- Define the roadmap for introducing value management
- Identify executive sponsor and 'champions'
- Comms program
- Determine funds needed
- Inform and commit leadership
- Align and integrate with financial planning
- Processes, roles, responsibilities
- Framework, Portfolio, and Monitoring
- Close the loop
Inventory of Investments
- Assess and score current and candidate investments
- Make this part of the normal selection process
Clarify Value
- Benefit assessment of 'in flight' investments - track and update
- Delivery assessment of 'in flight' investments
- Define and implement reporting and review regime
Windows7 GodMode Control Panel Items
1/7/2010 8:07 PM

To create the Godmode folder, create a new folder and change its name to this string:
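The string itself didn't survive the copy; the widely published one (the text before the dot is just an arbitrary folder name) is:

```
GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}
```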


Now these were 'published' about Windows7, but many work on the server...

Here's the list of strings:  - and if they work in Windows2008R2, what they do!

{025A5937-A6BE-4686-A844-36FE4BEC8B6D} - PowerPlan
{05d7b0f4-2121-4eff-bf6b-ed3f69b894d9} - taskbar Notifications
Vault.{1206F5F1-0569-412C-8FEC-3204630DFB70} - Windows (Credentials) Vault  <- AWESOME!!!
{15eae92e-f17a-4431-9f28-805e482dafd4} - Install program from the network
{17cd9488-1228-4b2f-88ce-4298e93e0966} - Windows Default Programs
{1D2680C9-0E2A-469d-B787-065558BC7D43} - Opens the GAC
{208D2C60-3AEA-1069-A2D7-08002B30309D} - Network
{20D04FE0-3AEA-1069-A2D8-08002B30309D} - Disks on Local
{2227A280-3AEA-1069-A2DE-08002B30309D} - Printers
Remote.{241D7C96-F8BF-4F85-B01F-E2B043341A4B} - Remote  <- I like this one
{4026492F-2F69-46B8-B9BF-5654FC07E423} - Firewall


Oh, and once the icon changes, if it didn't remove the GUID from the name, you can safely do that.

The Lead Developer Model
11/24/2009 2:05 PM
The Lead Developer Model is a venerable and respected way to organise work inside a development team.
It is heavily used in organisations like Microsoft.
In this model, work packages are allocated to a lead developer, who is then responsible for fulfilment of the work. The lead developer will allocate work to the other team members, and coach them through their completion of the work allocated to them.
[Benefit: This model creates healthy incentives for transferring skills, while ensuring tasks are managed by a more senior developer.]
All work should be peer-reviewed at least every 2 days - this keeps the reviewer fresh on what the work item/package is about.  Note that this includes the lead developer getting their work reviewed by the others, which is a great knowledge transfer opportunity.
[Benefit: in case of emergency, the reviewer should have enough knowledge to ensure work doesn't get lost]
Where there is no clear "seniority", developers may choose which of them will be the "lead" for a particular project phase.
The normal case is that each lead developer has one or two other developers working with them - it is unusual to put more developers into one group, since that starts to reduce the work efficiency of the lead developer (turning them into a manager). With 4 developers, for example, the ideal structure would be to have two leads, each with one other developer.
Both lead developers and junior developers usually report increased job satisfaction from this model: leads because of recognition and the chance to delegate some of the work, and juniors because they get more guidance about what is expected from them, and the chance to understudy a more senior developer.
Sociability testing - generic application built on SharePoint
11/24/2009 2:02 PM
* server
 - other apps already present on the servers involved [ie. using the same database] - do they still work?
 - how are other apps affected when this app is under load?
* network
 - performance from home, from overseas, via citrix
 - performance when network is busy
 - performance when this app is under load

* client
 - windows explorer ability to open libraries and interact with files
 - save/saveas from inside Word/Excel/etc
   (a) created externally
   (b) opened from a library
   (c) created from the new button inside a library
 - document library ability to open in windows explorer (from SharePoint UI) 
   [exact behaviour varies with permissions, versioning etc]
 - sync with Outlook command - what happens?
 - link to a SharePoint list from Access

Tools I like with SharePoint
11/20/2009 11:48 AM

Tools I would like: (bold ones are more urgent than the others)
- block all users except for members of a specific AD group from using SharePoint Designer on prod
- deploy and manage InfoPath forms easily
- add HTML snippets (such as video) into a SharePoint enhanced rich text field
- solution pack generator
- deploy/retract etcetera the solutions
- attach the context menu to ID instead of title for those times you don't want it
- WebService that allows InfoPath to know the SharePoint groups of the current user - great for adjusting views on the fly
- lookup field with picker



bunch of SharePoint designer custom activities from (includes a solution to the approval to change permissions issue for KMMS)


Ones that cost:

DocAve -

OR echoforsharepoint (

in particular BDC metaman





for infopath -



Please download and install in a dev environment the solution at

Creating SCORM content
11/20/2009 11:47 AM (includes download, samples, etc)

Should we also consider a Managesoft package for it? - Maybe that should wait for the team downstairs to decide whether it is welcome in the environment or not...

(Its major pluses are that it is free, it is architecturally benign, and normal users can use it)

FYI, there is a maintenance forum -

digital certificate
11/20/2009 11:46 AM

Windows comes with the full infrastructure to run an internal, enterprise level digital certificate environment.  DFAT is fully licensed to use this. DFAT already runs the necessary infrastructure, and uses it to provide server certificates and signing certificates (as ADS found out about the InfoPath forms and KMMS).

DFAT could, at very low effort, roll out digital certificates for every domain user.

This could be done automatically, without any user involvement, if that was desired.   (New in Windows Server 2003) This would open the door to a number of scenarios, where the certificate could be used by an application in response to a user event, without the user needing to know about it at all. 

Alternatively, the user could be asked to specify a special password/key, which would have to be entered whenever they wanted to use their certificate.

For further information, see

[I assume that the supported devices we are talking about are PCs - you can also deploy certificates on other devices, but effort might increase.]

Why is it timely to consider this now?

Exchange and Outlook provide extensive support for digital signatures, and are being planned at the moment

InfoPath and SharePoint provide support for digital signatures, and are being used for eForms and related projects

Scenarios where certificates can help:

Digitally sign a form, such as approving leave. This is a legally acceptable alternative to printing documents out and signing manually, and would assist us in moving to greener alternatives than printing everything out.

Corporate applications could use the digital signature rather than merely account logon in situations where a higher degree of fidelity was desired.

Digitally sign communications, allowing us to flag as suspect any messages which are not signed. This is a mandatory capability to protect against email spoofing/alteration (a la Godwin Grech).

A variety of encryption capabilities...


How does this relate to Gatekeeper, the federal government standard for digital certificates?

Gatekeeper compliant certificates cost money, and add significant administrative overheads.

Gatekeeper requires the agency to use their certificates for external communications: it has nothing to do with internal scenarios.

From page 35 - - accessed 2/11/2009:

"Where Agencies determine that PKI is the appropriate authentication mechanism for external purposes, digital certificates must be issued by a Gatekeeper Accredited/Recognised Service Provider and comply with Gatekeeper Policies and Criteria."

Version control for PeopleSoft
11/20/2009 11:44 AM

You also raised the issue of version control for PeopleSoft work, which is a vexed issue in most environments. PeopleSoft's infrastructure has thousands of parameter values stored against hundreds of tables - not at all easy to version control changes. I am only aware of 2 approaches:

- STAT from Quest Software (

- apparently there is a tool which does a compare, and generates an instruction file, which can then be applied against different environments. That file could be stored in any version control system. (

Differencing in TFS
11/20/2009 11:43 AM

TFS comes with a built in tool for doing differencing -

Most people find it to be dreadful. If we have Beyond Compare approved for use in the environment, then I would like to see it bound into TFS as the default tool... (instructions follow)

To replace the diff/merge tool, in Visual Studio go to Tools > Options > Source Control > Visual Studio Team Foundation Server and select Configure User Tools. Select Add and enter the extensions the tool should be used for (in this example, both .XML and .PROJ, because all .PROJ files are MSBuild XML files). The great thing about TFS is that it will use the tool registered with the given file type, so you can use multiple tools depending on the extension.

The additional command line instructions which might be needed are at
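As an illustration only (the install path and argument tokens vary by Beyond Compare version - check the instructions referenced above), the Configure User Tools compare entry typically ends up looking like:

```
Operation: Compare
Command:   C:\Program Files\Beyond Compare 3\BComp.exe
Arguments: %1 %2 /title1=%6 /title2=%7
```

Here %1 and %2 are the two file versions TFS passes in, and %6 and %7 are their display labels.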

Search
11/20/2009 11:41 AM

The behaviour described to me is that during the rebuilding of the index, search first returns nothing, then returns partial result sets, and finally returns all of the expected results.

I have never encountered this, and I have not been able to discover anything about it on the web.

Let me clarify what should be happening, since there seems to be some confusion over server roles.

Each server can be configured to have the index (also called crawler) role and/or the query (also called search) role (or both or neither).

Unlike the Windows SharePoint Services 3.0 search role, content indexes produced by the Office SharePoint Server 2007 index role are continuously propagated to all servers that host the query role in a farm.

In small farms, typically one WFE is configured to index, and the other WFE is configured for query.

Note that this is different from WSS, where it is common to put index & query onto an app server - in WSS, you cannot split the roles. Note that in MOSS, putting these roles on a separate server will introduce latency, since the services constantly need to 'chat' with a WFE.

Since we are talking about search, here are my search-related 'tips'

1. Make sure that 'Crawl SharePoint content as HTTP pages' is set to NO - otherwise it is slow, and loses permissions info.

2. Stagger the crawl schedule so the load is distributed over time

3. Use File Groups to separate the query and crawl tables in the search database

4. For details on perf tweaks, see

5. You can create and manage content indexes only if you have enabled advanced search administration mode.

6. Propagation troubleshooting:

Serena Composer
11/20/2009 11:28 AM
SCOM and SharePoint
11/20/2009 11:23 AM

using SCOM

The SharePoint pieces that came in the box with SCOM 2007 have been updated - see

One way to achieve our objective would be to design a report in SCOM, and then surface it through SharePoint - here is a how-to.
This needs SCOM SP1.

If the standard parts aren't sufficient, there is a Service Level Dashboard Accelerator we could leverage - looks very cool - it's at

No "custom tab" - help!
9/10/2009 4:24 PM

This is a known bug with SharePoint, although Microsoft have never addressed it.

It seems the UI method for publishing a template can sometimes fail.


Most users report that the following resolves the issue:

stsadm -o addtemplate -filename c:\path\myfilename.stp -title "Template Title"

You will then need to restart IIS with iisreset

My DASL Filter
6/19/2009 3:42 PM
My preferred view of the Inbox involves all items with non-complete follow-up flags AND all unread items.
("urn:schemas:httpmail:read" = 0 OR (NOT("" = 1) AND NOT("" IS NULL)))
Selection criteria for BA
6/16/2009 9:36 AM

Last year there was a great article on called the Six Secrets of Top Notch Business Analysts

• They understand the specific business problem that software aims to solve.
• They are diplomats, translators and negotiators.
• They can see the forest through the trees.
• They understand technology's potential and its limitations.
• They have credibility with business colleagues, often gained through previous work experience.
• They are "people persons."

Ron Jeffries in a post on the XP Yahoo! Discussion board in 2004:
1/28/2009 3:38 PM
Right now this looks like a 200-point project. Based on our performance on other projects, with your intimate involvement in the project, a project of this size should take between 4 and 6 months. However, we will be showing you working software every two weeks, and we'll be ticking off these feature stories to your satisfaction. The good news is, if you are not satisfied, you can stop. The better news is that if you become satisfied before all the features are done, you can stop. The bad news is, you need to work with us to make it clear just what your satisfaction means. The best news is that whenever there are enough features working to make the program useful, you can ask us to prepare it for deployment, and we'll do that. As we go forward, we will see how fast we are progressing, and our estimate of the time needed will improve. In every case, you'll see what is going on, you'll see concrete evidence of useful software running the tests that you specify, and you'll know everything as soon as I know it.
Arguments for T&M
1/28/2009 3:37 PM
  • Do you often find that your requirements change midstream, even after everything has been captured and analysed?
  • Would you like to be able to change your mind about the features you have requested every two weeks?
  • Would you be willing to spend some time with us to help us better understand what your vision is for the implementation of these new features?
  • How often during a previous project have you seen working software? How often would you like to in this project?
  • Would you like us to show you what we have accomplished by demonstrating the product to you every two weeks?
  • How should we mitigate against time constraints? Or against potentially building the wrong thing? Do you have a prioritized list of work that could help us with this?
  • People typically roll on and off projects, which can cause delays as information needs to be transferred and absorbed (and can sometimes be lost). How would you like us to handle this?
  • Would you like the right to close or cancel the project at any time with only 30 days' notice?
They usually still won't do it. So chunk it - and make sure you understand the acceptance criteria…
Elevator statement format for dummies:
1/28/2009 3:36 PM
For (target customer)
Who (state need or opportunity)
The (product name)
is a (product category)
That (key benefit, compelling reason to buy)
Unlike (primary competitor)
Our product (primary differentiation)
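A filled-in instance of the template may help - the product and details below are entirely hypothetical, purely to illustrate the pattern:

```
For busy team leads
Who need project status without chasing people
The StatusBoard (hypothetical) is a team dashboard
That aggregates task updates into one daily view
Unlike manual status meetings
Our product compiles the picture automatically from the tools the team already uses
```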
Notes from a hiring manager
12/7/2008 9:35 AM
I was recently helping to recruit 7 senior contract roles for an agency. In this note I want to write about the process of shortlisting candidates for interview. Note this was for senior contractor positions, and our standard was to not recruit if we couldn't find a really good candidate - those roles might be referred to an outsourcer.
We had 103 applications for these 7 roles - 39 applicants for one of the roles.  We wanted to interview 3 or 4 people for each role (so perhaps 25 of the 103).  There are no real rules on how this shortlisting process should be done.  This was not about listing all those who were suitable, or not suitable. This was about selecting the top few to consider in detail: the aim was to knock people out.  I skimmed through all of the CVs to shortlist the candidates. Note that word 'skimmed' - in less than 60 seconds I formed a view as to whether this was a strong or weak candidate by reviewing their EXPERIENCE (ie CV).  I spent another 2-3 minutes skimming the rest of their application confirming that assumption (changed it in 5 cases out of 103 candidates) and jotting down a few words about the candidate.
Looking back over those comments, there were some patterns:
  • 'blah' - this is just another applicant: probably suitable, but with nothing special to recommend her
  • 'not senior enough' - this was a relative judgement against the other candidates
  • agencies they had worked for - interestingly, this was usually next to people that did NOT make the cut
  • special highly relevant experience - these people usually DID make the cut
  1. Figure out what the main concerns of the role (and hence the recruiters) are - if the information in the advertisement isn't sufficient for you to understand what they want the role to do, ASK.  If your application (statement against criteria OR experience) doesn't quite fit, you may get excluded if there are enough other candidates who look like a better fit.  Unless there is some reason to think a generalist would have appeal, DON'T use a general CV that says you are a PM/BA/TestManager/Executive - many organisations want people who will do their job, and not keep trying to do someone else's job.  Concentrate the CV on RELEVANT experience.  If they want someone for a DEVELOPMENT project, you need to present the RELEVANT experience in your CV - if your experience all looks hardware related, you might not make the shortlist (and vice versa).  Very few candidates presented a customised CV - and those candidates did better.
  2. Assume all your competitors will satisfy the criterion. Now SET YOURSELF APART.
  3. Very few of the applicants spotted the patterns in the criteria, and went the extra step to create a gestalt - those few tended to do better in the shortlisting process.
[One recruiter asked a few questions about the roles, but NO candidates did.]
1. While these issues weren't sufficient on their own to remove a candidate from consideration, they hurt those candidates:
  • Statements against some other set of selection criteria, presumably done for a different role
  • Candidates who typed out an industry acronym and got it wrong
  • Spelling mistakes and grammatical errors which Microsoft Word identified
  • CVs that looked completely irrelevant
  • If your CV looks like no one ever renewed you, you need to fix that
  • If your CV looks like you used to do this kind of work but haven't been for several years, try to draw the thread through... Otherwise people with more recent experience may be preferred.
2. Reputation is everything: at this stage, if the wife of the person sitting next to the reviewer worked with a candidate and didn't like them, that candidate won't make the cut.  The slightest hint of negativity and you are out - after all, there are these shiny strangers with no mud against their name to consider.
Saving sites from a publishing site
6/16/2008 2:47 AM
The following manual kludge has been doing the rounds:
"So just to make it clear.  If your subsite is
You would go to
This is unsupported, and generally won't work to move from one site to another: there are too many interdependencies in publishing sites that aren't present in the actual subsite.
However, the Import/Export command (from the command line or SharePoint Designer) can help if you want to move the whole thing.
MCMS to MOSS2007 Migration
6/8/2008 4:02 PM
Make sure everything you want to migrate is checked in.

Clean up MCMS Site Content
Principle: when stuff would convert into structures we don't want moving forward, it is better to remove it from MCMS rather than converting and then deleting. This is because the deletion process for some items in MOSS is long, and for some items leaves residue that may have later impacts (especially for Content Types, Site Columns, Page Layouts, etc.), especially if an item with the same name is later created.
  1. Non-unique leaf names - identified by the migration tool
  2. Amend "onezies" (items in a folder, not individual folders each with a default page)
  3. Identify and remove hidden/special channels: most will be implemented as custom lists
  4. Remove templates that aren't used much, and their postings - re-enter the data (much easier than having to remove the sites and custom content types they create - especially since leftovers would abound)
  5. Identify all uses of XML Placeholders - may need CustomFieldControls, or XML Web Parts <McmsXmlPlaceholder>.  Stuff that becomes defaults: Navigation, Search, Summary Pages, Deployment Scripts, Form login screen
  6. Images and other resource gallery items: you may wish to move these "up" the hierarchy so they are shared (by default the migration will create a copy for each one). Note that these get named with their MCMS GUID, which is ugly: perhaps record the GUID-name correlation before transition...
  7. Check all names for invalid codes

These items have naming restrictions - the following characters/patterns are not allowed:

  • Site URLs: \ / : * ? " < > | # { } % ~ &
  • Site names: \ / : * ? " < > | # { } % ~ & ; may not start with _ ; may not start or end with
  • Folder names: \ / : * ? " < > | # { } % ~ &
  • File names: \ / : * ? " < > | # { } % ~ &
  • File names and folder names: may not end with .files; _files; _file; _failid; _fails
  • Page definition names: \ / : * ? " < > | # { } TAB ; may not end with ; may not be longer than 128 characters
  • Placeholder names: non-alphanumeric characters
  • User names (for forms authentication):
  • Rights group names: / \ [ ] : | < > + = ; , ? * ' " @
  • User role names: / \ [ ] : | < > + = ; , ? * ' " @
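As a sketch, the character restrictions above can be checked mechanically before migration. This fragment is illustrative only - it covers just the restrictions that survived in the table, not the full MOSS rule set:

```python
# Characters disallowed in site URLs, site names, folder names and file names
# (taken from the restriction list above).
FORBIDDEN_CHARS = set('\\/:*?"<>|#{}%~&')

# Suffixes disallowed at the end of file and folder names (from the list above).
FORBIDDEN_SUFFIXES = (".files", "_files", "_file", "_failid", "_fails")

def name_problems(name):
    """Return a list of restriction violations for a file/folder/site name."""
    problems = [c for c in name if c in FORBIDDEN_CHARS]
    if name.startswith("_"):
        problems.append("leading underscore (site names)")
    if name.endswith(FORBIDDEN_SUFFIXES):
        problems.append("forbidden suffix (file/folder names)")
    return problems
```

Running this over the MCMS channel and resource names before migration would flag most of the renames the tool will otherwise choke on.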

Prepare Destination
Must have an empty site - it won't overwrite

  1. Migrate the accounts and groups
    (note impact of forms-based authentication)
  2. Content approval workflows in MOSS use Infopath forms
  3. Packaging MOSS features and solutions
  4. Create and manage a Minimal Master? []
  5. Use field value controls and edit mode panels to tweak the []
  6. Enable output caching?
Need to see what happens when a virus comes into your program?
5/24/2008 1:49 PM
There is a "test" file that isn't really a virus, but which virus scanners are all taught to recognize as one - the EICAR Standard Anti-Virus Test File - see
Database use from the BRC
5/20/2008 4:58 PM

As you know, I use an XML helper class that ships with BTS2006 to add nodes to the XML to record each rule violation. Surprisingly, it wouldn’t work on the new VM image until I altered the registry to add "StaticSupport"=dword:00000002 to [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BusinessRules\3.0]


While investigating the error, I discovered a TechNet note we should bear in mind when we connect the rules into Hermes – apparently in the BRC I get a DataConnection object automatically built, but when we call the same policy from an orchestration by using the Call Rules shape, we don’t get any such automatic support.  We will need to create a DataConnection object in the orchestration and pass it as a parameter (or create a fact retriever component that asserts the DataConnection object, and configure the policy to use the fact retriever component).


The trick with those instructions is to notice the extra input parameter that appears in the call rules shape...


However, we were unable to get this to work reliably from an Orchestration, so we switched to a static class:


public static bool isItemInTable(string Table, string Column, object Item)
{
    // e.g. select count(*) from [SRSCodeLists].[dbo].[Priority] where [Value] like 'CRASH';
    using (SqlConnection cn = new SqlConnection("Data Source=.;Initial Catalog=SRSCodeLists;Integrated Security=SSPI"))
    {
        // Table and Column come from our own rules; the value is parameterised
        // so a stray quote in the data can't break (or inject into) the query.
        string queryString = "select count(*) from [SRSCodeLists].[dbo].[" + Table + "] where [" + Column + "] like @item";
        SqlCommand cmd = new SqlCommand(queryString, cn);
        cmd.Parameters.AddWithValue("@item", Item.ToString());
        cn.Open(); // without this, ExecuteScalar throws InvalidOperationException
        int count = (int)cmd.ExecuteScalar();
        return count > 0;
    }
}

Network speed and latency
5/15/2008 2:48 PM
I got 1100 at work
SteveJ's wireless got 500
Network tools like ping tests and traceroute measure latency by determining the time it takes a given network packet to travel from source to destination and back, the so-called round-trip time. Round-trip time is not the only way to specify latency, but it is the most common.
On DSL or cable Internet connections, latencies of less than 100 milliseconds (ms) are typical and less than 25 ms desired. Satellite Internet connections, on the other hand, average 500 ms or higher latency.
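The thresholds in the paragraph above can be expressed as a quick classifier - a sketch where the band labels are mine, but the numbers come straight from the text:

```python
def classify_latency(rtt_ms):
    """Rough quality band for a round-trip time, per the figures above."""
    if rtt_ms < 25:
        return "desired (DSL/cable)"
    if rtt_ms < 100:
        return "typical (DSL/cable)"
    if rtt_ms < 500:
        return "high"
    return "satellite-range"
```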
Determine version of SQL Server installed
4/30/2008 11:09 AM
SQL2005 and 2000
SELECT  SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition')
The following table lists the Sqlservr.exe version number:
SQL Server 2005 Service Pack 1
SQL Server 2005 Service Pack 2 
SQL Server 2000 SP1
SQL Server 2000 SP2
SQL Server 2000 SP3
SQL Server 2000 SP3a
SQL Server 2000 SP4

SQL7 and 6.5
Version Number 
Service Pack
SQL Server 7.0 Service Pack 4 (SP4)
SQL Server 7.0 Service Pack 3 (SP3)
SQL Server 7.0 Service Pack 2 (SP2)
SQL Server 7.0 Service Pack 1 (SP1)
SQL Server 7.0 RTM
6.5 SP 5a Update
6.5 Service Pack 5a
6.5 SP5
6.5 SP4
6.5 SP3
6.5 SP2
6.5 SP1
6.5 RTM
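The first component of the productversion string identifies the release; a minimal lookup sketch, covering only the releases listed above:

```python
def sql_release(product_version):
    """Map a SERVERPROPERTY('productversion') string to a release name."""
    major = product_version.split(".")[0]
    releases = {
        "9": "SQL Server 2005",
        "8": "SQL Server 2000",
        "7": "SQL Server 7.0",
        "6": "SQL Server 6.x",
    }
    return releases.get(major, "unknown")
```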
Determine version of BizTalk installed
4/29/2008 12:23 PM
The 'ProductVersion' key located in 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BizTalk Server\3.0'
Product name Service pack Version number
BizTalk Server 2004 - 3.0.4902.0
BizTalk Server 2004 SP1 3.0.6070.0
BizTalk Server 2004 SP2 3.0.7405.0
BizTalk Server 2006 - 3.5.1602.0
BizTalk Server 2006 R2 - 3.6.1404.0
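The table above, as a lookup sketch for scripting against the 'ProductVersion' registry value:

```python
# Version-number-to-product map, taken directly from the table above.
BIZTALK_VERSIONS = {
    "3.0.4902.0": "BizTalk Server 2004",
    "3.0.6070.0": "BizTalk Server 2004 SP1",
    "3.0.7405.0": "BizTalk Server 2004 SP2",
    "3.5.1602.0": "BizTalk Server 2006",
    "3.6.1404.0": "BizTalk Server 2006 R2",
}

def biztalk_product(version):
    """Resolve the 'ProductVersion' registry value to a product name."""
    return BIZTALK_VERSIONS.get(version, "unknown version: " + version)
```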
Calling custom actions from the Business Rule Composer
4/16/2008 12:26 PM

For this to work, we are required to implement IFactCreator (I know, we aren't creating a custom fact creator, but you need this nevertheless) - this is what creates a custom array to hold the results and passes them back into the BRE.  Without this step, code will execute, but return values will not go into the BRE (exactly the behaviour we observed).


using System;
using Microsoft.RuleEngine;

/// Summary description for FactCreator.
public class FactCreator : IFactCreator
{
    public object[] CreateFacts(RuleSetInfo rsi)
    {
        object[] o = new object[1];
        o[0] = new <CUSTOMPROJECT>.<METHOD>(); // ie. Cubido.BizTalk.Tools.DateTool()
        return o;
    }

    public Type[] GetFactTypes(RuleSetInfo rsi)
    {
        Type[] t = new Type[1];
        t[0] = new <CUSTOMPROJECT>.<METHOD>().GetType();
        return t;
    }
}




The assemblies have to be in the GAC to be visible to the BRComposer.


To test such a component within the Business Rule Composer you have to add the Assembly with the class implementing the IFactCreator interface to the fact creators in the Test Policy dialog.


If the Rule is accessed from an orchestration, the class which implements the custom function has to be assigned to a variable (Variables -> New Variable) and provided as parameter in the Call Rule shape. 

Making changes to a deployed orchestration
4/16/2008 12:01 PM
In BTS Admin, delete any suspended instances
In VS.Net, make the desired changes, build and deploy
In BTS Admin, restart the host instance, and refresh the application
Making changes to a deployed business rule
4/16/2008 11:59 AM
Create a new version, and copy the old rule
Undeploy the old version, then delete it
Make changes to the new version, then save/publish/deploy
Although it should not be necessary, if you start to get uncaught exceptions, restart the Host Instance in BTS Admin
Making changes to a published vocabulary
4/16/2008 11:57 AM

To use an element from a vocabulary in a rule, the vocabulary must be published. But once it is published, you can’t make changes.  This is a pain during development, especially since a new vocabulary would need to be completely typed out, and then every reference to the old vocab would need to be manually changed to a reference to the new vocab inside each of the rules.

Fortunately, there is a workaround.

1) When you want to make a change, run the following SQL, where SRS is the name of my vocabulary
UPDATE [BizTalkRuleEngineDb].[dbo].[re_vocabulary]
SET    nStatus = 0
WHERE  (strName = N'SRS');

2) Then in the BRE, refresh the vocab.

3) Now make all desired changes, and SAVE.

4) Back in SQL, run
UPDATE [BizTalkRuleEngineDb].[dbo].[re_vocabulary]
SET    nStatus = 1
WHERE  (strName = N'SRS');

5) And back in the BRE, refresh again.

Now you can leave all your old references alone, and use the new/modified ones.

 [If you make a mistake and try to re-publish the rules, you will get a duplicate key error message: no harm done.]

Adding nodes from the Business Rule Composer
4/16/2008 11:55 AM

A lot of what we thought was true in this space was a limitation of BTS2004.  Microsoft addressed the problem in BTS2006, but didn’t tell anyone.

In the Fact Explorer of the Business Rule Composer go to the .Net Classes tab, and browse to “Microsoft.RuleEngine”

The methods we want are in the class “XMLHelper”

 The following syntax was successful:
XMLHelper.AddNodeWithValue (Training1.person:/Person, . , NodeName, NodeValue)

 It produced the following output file:
<?xml version="1.0" encoding="utf-8"?> <Person><Name>wdfgdfgdfgdg</Name><Age>39</Age><NodeName>NodeValue</NodeName></Person>


  • The first parameter was done by dragging and dropping the parent node of the XML Schema – While the parameter will accept either, you want the node, not the schema itself.
  • The second parameter is a dot – standing for “put it in the root”
  • The third and fourth parameters are strings
  • Be careful that you are trying to create a node where you can: trying to put one under an attribute obviously won't work
  • You can drag and drop a value from the xml instead of any string
  • If you get an uncaught exception, restart the application in BTSAdmin - it gets confused sometimes
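In plain XML terms, the effect of that AddNodeWithValue call can be reproduced with a stand-in sketch - Python ElementTree here, not BRE code, purely to show the before/after shape of the document:

```python
import xml.etree.ElementTree as ET

# The instance document before the rule fires (values from the example above).
doc = ET.fromstring("<Person><Name>wdfgdfgdfgdg</Name><Age>39</Age></Person>")

# AddNodeWithValue(<schema node>, ".", "NodeName", "NodeValue"):
# the dot means "append under the root".
node = ET.SubElement(doc, "NodeName")
node.text = "NodeValue"

print(ET.tostring(doc, encoding="unicode"))
# <Person><Name>wdfgdfgdfgdg</Name><Age>39</Age><NodeName>NodeValue</NodeName></Person>
```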
Colour vision
3/3/2008 9:16 AM
10% of men have impaired colour vision - will your web page still be usable by them?
Website for simulating what colour-vision impaired people will see of your webpage -
Central Govt Policies
9/25/2007 11:01 PM

...efficiently and effectively adopt a whole of government approach to managing processes, policy and program development and service delivery.

ANAO Better Practice Guides

ANAO Better Practice Guides (BPGs) aim to improve public administration by providing a mechanism whereby better practices employed in agencies are recognised and promulgated to all Australian Government entities. This can involve examining practices in the public or private sectors in Australia or overseas. ANAO’s emphasis is to identify, assess and articulate good practice from the Agency’s knowledge and understanding of the public sector, as well as areas where improvements are warranted.

Australian Public Service Commission Building Capability

The Australian Public Service Commission provides agencies with tools, guidance and resources to work more effectively. Resources provided include HR capability modelling, recruitment kits, management resources and leadership frameworks and systems.

Better Practice Centre @ AGIMO

The AGIMO Better Practice Program facilitates information sharing and improved access to government information and services through a range of e-Government initiatives in collaboration with Australian Government agencies. It includes better practice guides on Internet/e-Government related processes and host contributions from government agencies. The aim is to promote excellence in e-Government.

Better practice guide on governance arrangements in government

The aim of this guide is to promote consistency in governance arrangements of Australian Government bodies, while reinforcing the principles set out in the Review of Corporate Governance of Statutory Authorities and Office Holders. This is in line with the Department of Finance and Administration’s ongoing role of promoting better practice governance of Australian Government bodies generally. The policies set out in the document provide a strong platform for informed discussion when officials consult with, or seek advice from, central agencies on the merits of alternative structures for Australian Government bodies.

Business Cost Calculator

Business Cost Calculator is administered by the Office of Best Practice Regulation (OBPR, formerly the Office of Regulation Review). It is a tool for estimating the compliance costs of regulation. It provides an automated and standard process for quantifying compliance costs of regulation on business, using an activity-based costing methodology.

Cabinet Implementation Unit

The Cabinet Implementation Unit helps departments and agencies improve the way they develop and implement the Australian Government’s decisions, and how they report on the measures being implemented. The Unit’s website provides practical information to help public servants and those who work with them on the implementation of projects and the management of programs, and the implementation and delivery of initiatives and ongoing programs. Specific areas where the Unit can help include preparing new policy proposals, developing implementation plans for new measures or programs and support for better project and program management.

Delivering Australian Government Services: Access and Distribution Strategy

The Access and Distribution Strategy sets out a high-level framework that promotes an environment in which agencies are able to integrate and share services and information across a range of channels. To achieve this aim, the Strategy advocates that agencies build whole of government capacity, develop technical and information interoperability, and take a strategic approach to the use of service delivery channels.

Delivering Australian Government Services: Managing Multiple Channels

Managing Multiple Channels recognises that Australian Government agencies deliver services to customers/citizens through a variety of channels (shop fronts, call centres, websites, etc) and outlines a process for aligning customer needs, services and channel provision.

Finance and Budget process advice

The Financial Framework output develops and maintains the financial regulatory framework for the general government sector, focusing on effective financial governance, financial management and accountability. The Budget Advice output supports Australian Government agencies through advice on the outcomes and outputs framework and performance management system.

ICT Business Case Guidance

Rigorous business case planning ensures that ICT investment across government is well planned and managed. Robust business cases reduce the risk of time and cost overruns and of projects not achieving anticipated benefits. Better business cases strengthen the quality of strategic alignment, project planning, financial estimates, and cost benefit and options analysis. The ICT Business Case Guide and Tools helps agencies to develop business cases with comprehensive cost benefit analysis and more detailed project planning. Agencies trialing the ICT Business Case Guide and Tools over the past year report that benefits include the development of a common language around investment and business planning.

Implementing machinery of government changes

Guide to implementing machinery of government changes provides practical guidance to help agencies implement changes. It relates primarily to moves between Australian Public Service (APS) agencies, subject to the Public Service Act 1999 and the Financial Management and Accountability Act 1997, but may also provide useful guidance for moves into, or out of, the APS. The Guide provides an overview of the MoG process, principles and approaches for planning and implementing MoG changes, guidance on financial management and people management and advice on managing physical relocations, records and taxation.

Outcomes and Outputs Framework

The outcomes and output framework of the Department of Finance and Administration provides guidance to departments and agencies for structuring corporate governance and management arrangements and for reporting on planned and actual performance.

Reducing Red Tape in the APS

This report focuses on internal and the whole of government regulatory and administrative requirements of the Australian Government. It sets out a principles-based framework for the review of existing requirements and for the scrutiny of proposals for new requirements, with a view to reducing red tape. The report begins with an overview of the framework for design and review of requirements and a discussion of the main elements of the process.

Source IT

This is a site for Australian Government agencies that are dealing with Information and Communication Technology (ICT) sourcing issues.


2006 e-Government Strategy

The Australian Government released the first version of e-Government strategy, Better Services, Better Government in 2002. Since then, much has been done to achieve the vision outlined in that document and there is no doubt that Australians now have ‘better services’ and ‘better government’. The 2005-06 e-Government strategy, Responsive Government: A New Service Agenda, builds on the momentum and achievements of the first strategy, taking into account lessons learned to deliver an even more coordinated and citizen-driven focus to the government’s e-Government initiatives. It is about strategically applying ICT to improve and reform government processes. The strategy recognises the devolved nature of the Australian Government and the importance of supporting cooperation and sharing to realise the potential of e-Government.

The Australian Government Architecture

The Australian Government Architecture (AGA) aims to assist in the delivery of more consistent and cohesive service to citizens and to support the more cost-effective delivery of ICT services by government, providing a framework that:

• provides a common language for agencies involved in the delivery of cross-agency services

• supports the identification of duplicate, re-usable and sharable services

• provides a basis for the objective review of ICT investment by government

• enables more cost-effective and timely delivery of ICT services through a repository of standards, principles and templates which assist in the design and delivery of ICT capability and, in turn, business services to citizens.

Australian Government Information Interoperability Framework

The Framework provides practical guidance for achieving the successful transfer of information across agency boundaries. The Information Interoperability Framework aims to assist agencies to improve their capacity for information management in support of information exchange.

Australian Government Technical Interoperability Framework

The Australian Government Technical Interoperability Framework was developed by the Interoperability Framework Working Group (IFWG), a reference group of senior technical architects nominated by the Chief Information Officers’ Committee (CIOC) and supported by AGIMO. The latest version of the Framework responds to developments in the ICT industry supporting business and government interconnectivity. The Framework specifies a conceptual model and agreed technical standards which support collaboration between Australian Government agencies. Adopting common technical protocols and standards will ensure government ICT systems interoperate in a trusted way with partners from industry and other governments. Interoperability will improve efficiency, reduce costs to business and government and will support agencies’ capacity to respond to public policy developments.

Connected Government: Agencies Working Together

This good practice guide is derived from the MAC Report: Connecting Government: Whole of Government Responses to Australia’s Priority Challenges, Good Practice Guides and gives practical advice on working whole of government. This information can be used to work through whole of government projects from determining how to structure a group working across departments to managing emergency responses. It will also be useful for those already working on whole of government initiatives.


GovDex

GovDex is a resource developed by government agencies to facilitate business process collaboration across policy portfolios, such as Taxation or Human Services, and across administrative jurisdictions. GovDex promotes effective and efficient information sharing, which is core to achieving collaboration. It provides governance, tools, methods and re-usable technical components which government agencies can use to assemble and deploy information services on their different technology platforms. GovDex is a key enabler to a whole of government approach to IT service development and deployment.

Guidelines for Establishing and Facilitating Communities of Practice

A community of practice is a group of peers with a common sense of purpose who agree to work together to share information, build knowledge, develop expertise and solve problems. Communities of practice are characterised by the willing participation of members and their ongoing interaction in developing a chosen area of practice. These guidelines provide tips on establishing and facilitating communities of practice.

The National Service Improvement Framework

The National Service Improvement Framework (NSIF) provides a series of re-usable documents, tools and templates to facilitate collaboration between government agencies. The National Service Improvement Framework aims to facilitate projects requiring collaboration within and between governments at all levels.

The National Service Improvement Framework website provides a knowledge base that will assist Local, State/Territory and Australian Government departments and agencies in the effective implementation of cross-jurisdictional projects. The key objectives of the National Service Improvement Framework are:

• to increase citizen satisfaction in dealing with government

• to improve the effectiveness and efficiency of government

• to build the capacity for cross-jurisdictional collaboration.

Working Together: Principles and Practices to Guide the Australian Public Service

The Management Advisory Committee has produced a best practice checklist based on the report, Connecting Government. The aim of the Guide is to assist agencies to achieve effective outcomes from work conducted jointly. It provides collaborating agencies with a checklist of responsibilities.

Office 2007 Product Capabilities
8/31/2007 1:37 PM

Microsoft Office Forms Server 2007


  1. Design forms in InfoPath, and have them displayed in any-old-browser, including mobile devices.
  2. The backend is XML, with strong integration to SharePoint workflow and BizTalk Server.
  3. Web services integration is a snap.
  4. Business users can design and publish forms (with or without management control from an IT admin)
  5. Form upgrades are straightforward, based on an XML transform.
  6. The server supports managed code execution on the server, allowing custom development.



  • Opens the use of InfoPath as a designer even when end users won't have it (including to the public)
  • Validation is performed in the client browser, reducing round trips etc







Microsoft Office Groove 2007


Comes with Client and Server.

  1. User creates a Groove Workspace, and invites colleagues
  2. Workspace has shared files, discussions, forms, etc - ON EACH USER'S COMPUTER
  3. Automatically syncs up when the machines are connected to the network (real fast, and under the hood)
  4. Provides presence awareness, alerts and real time tools
  5. Integrates with Sharepoint, InfoPath, etc.



  • Consider an executive using Groove just to sync a laptop with a SharePoint site
  • Distributed teams get presence awareness even if that feature isn't otherwise supported
  • Syncing very large files (i.e. coping with poor networks) - using Groove as a replication service






Microsoft Office PerformancePoint Server 2007


Business users can control business rules, but there is central management/audit

Uses "models", allowing you to push particular ways of looking at data up, down or across

Personalised scorecards (dashboards) [was Business Scorecard Manager 2005]


Advanced analytics (point and click // drag and drop) - pivot charts on steroids (uses ProClarity tech)


Planning: "robust planning, budgeting, forecasting, consolidation, and financial reporting capabilities." "provides operating units and departments with the flexibility to plan based on their unique business models while synchronizing their plans, budget, and forecasts with those at the corporate level."

"integrates data from multiple enterprise systems into a single, consolidated, and current repository of financial information. This integration ensures that all plans, budgets, forecasts, and statutory and management reports are built using the most consistent and current financial information."

  • Input forms created by analysts and completed by managers
  • Outlook and sharepoint for workflow/tasks/etc
  • SQL for security, robustness, etc


Microsoft Office Project Portfolio Server 2007


Bi-directional gateway with Project Server

Configurable workflows, but comes with best practice templates


Templates for data gathering


Priority Optimizer Module provides best practices for deriving prioritization scores, and sophisticated optimization algorithms


Portfolio Dashboard Manager provides portfolio scorecards with drill down


Microsoft Office Enterprise Project Management Solution


  • Project Web Access to prevent everyone needing the fat client
  • Proposals capability
  • For simple stuff, integrates with Sharepoint tasks
  • Enterprise Templates and Project Guides to establish best practices and make them easy to reuse
  • Project Workspaces are an integration of Sharepoint capability with project web access - very cool
  • Timesheet capabilities enhanced
  • Offline caching capabilities
  • Manage and track cross-project dependencies
  • Now based on .NET. Scheduling engine now on the server. Event model now workflow-enabled. Full API now exposed.


Microsoft Office SharePoint Server




  • Manage content and workflow processes, including rights management
  • Web content management
  • Forms services.
  • Centralised Report Center sites - a BI portal
  • SharePoint Enterprise Search / Search Center
  • Excel Services
  • Connectors to SAP and Siebel. Business Data Catalog for LOB apps. Business data catalog extends search into LOB apps.
  • Simplify internal and external collaboration:
  • Direct offline experience and Groove.
  • My Sites personalised experience
  • Built for extension into enterprise apps.
  • "Robust system monitoring, usage tracking, and monitoring tools"





Results-oriented user interface. Office Access 2007 has been updated with a fresh look that makes it easier to create, modify, and work with database solutions. The new results-oriented user interface (UI) is context-sensitive and optimized for efficiency and discoverability. While nearly 1,000 commands are available, the new UI displays only those that are relevant to the task you are performing at any given moment. In addition, tabbed windows view, a new status bar, new scroll bars, and a new title bar give applications built on Office Access 2007 a very modern look.


The new UI is limited to Access 2007, Excel 2007, Outlook 2007, PowerPoint 2007 and Word 2007


In Office 2007, Microsoft has introduced a new set of XML formats, in cooperation with ECMA (so these are an open standard, not a Microsoft-only format). The ECMA working draft is at version 1.5, and is expected to be approved by the ECMA General Assembly this month.

The new formats take advantage of a new standard in packaging information in a ZIP file, and describe metadata, parts, and relationships. This creates significant advantages over the old binary file format, and is much easier to work with programmatically. The formats can be freely implemented by multiple applications on multiple platforms, and organisations like Apple, Intel and Novell are already implementing them (these three were on the ECMA working group, along with the British Library and the US Library of Congress, and half a dozen others). Documents can easily be created in this format on the server directly from code, and without requiring Office.
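The "documents created on the server directly from code" point follows from the packaging design: a package is just a ZIP of named parts. The sketch below builds and reads a minimal package with nothing but a ZIP library. The part names follow the real Open Packaging convention, but the XML bodies are trimmed illustrations, not a complete valid Word document:

```python
import io
import zipfile

# Build a minimal package in memory: content-type declarations, a
# relationships part, and a document part, all just named ZIP entries.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("[Content_Types].xml",
                 '<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types"/>')
    pkg.writestr("_rels/.rels",
                 '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships"/>')
    pkg.writestr("word/document.xml",
                 "<w:document>Hello from code, no Office required</w:document>")

# Any ZIP-capable tool on any platform can enumerate and read the parts.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as pkg:
    parts = pkg.namelist()
    body = pkg.read("word/document.xml").decode()
```

This is exactly why the format is so much easier to work with programmatically than the old binary formats: server-side code can generate or inspect documents with commodity libraries.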


  • » Access
    • integration with SharePoint - including putting an Access file into a document library, accessing SharePoint data from Access, and even building integrated applications leveraging SharePoint's workflow.
    • lots of pre-built solutions to be customised
    • Strong integration with Outlook - use email to gather data for Access projects (or InfoPath); strong contact integration - incl email and RSS notifications!


  • » Excel
    • Expand sheet size to 1 million rows x 16000 columns
    • More OLAP
    • Redesigned charting engine
    • Excel Services dynamically renders an Office Excel 2007 spreadsheet as HTML so others can access a spreadsheet stored in Office SharePoint Server 2007 from within any Web browser. Because of the high degree of fidelity with the Office Excel 2007 client, people can use Excel Services to navigate, sort, filter, input parameters, and interact with PivotTable views, all within their Web browser.
    • Excel Services Web services application programming interface (API) to integrate server calculation of Office Excel 2007 files into other applications


  • The Microsoft® Office and Server & Tools teams are proud to announce the introduction of two great new tools for application building and Web authoring in 2006:

    • Microsoft® Office SharePoint® Designer 2007: Automate your business processes and build efficient applications on top of the SharePoint platform, and tailor your SharePoint® site to your needs in an IT-managed environment.
    • Microsoft® Expression™ Web Designer: Take advantage of the best of dynamic Web site design, enabling you to design, develop, and maintain exceptional standards-based Web sites. Expression also comes in flavours "Graphics Designer" and "Interactive Designer"


  • » Groove
    • Discussed above


  • » InfoPath
    • InfoPath-built forms can be completed in a browser, in Outlook, or on mobile devices
    • Rich client-side validation
    • XML-based version support
    • Workflow with SharePoint


  • » OneNote
    • gather and organize text, pictures, digital handwriting, audio and video recordings, and more—all in one digital notebook
    • Powerful search capabilities can help you locate information from text within pictures or from spoken words in audio and video recordings. And easy-to-use collaborative tools help teams work together with all of this information in shared notebooks, whether online or offline.
    • APIs for transferring data into business systems…
    • Link notes to contacts in OL // create and manage tasks


  • » Outlook
    • integrated Instant Search - includes contents of attachments
    • Create and subscribe to Internet calendars
    • Support for RSS
    • Send calendar snapshots
    • Integration with WSS
    • Send text messages through OL Mobile Service (incl sending your calendar to your phone)
    • Customise and send electronic business cards
    • Exchange:
      • Antispam/antiphishing
      • "Managed folders" - retention, archive, etc etc
      • "Email postmark"

  • » Project (STD)
    • Data driven diagrams created in Visio
    • Templates for Visio and Excel
    • Task driver analysis
    • Cost resources
  • » Publisher
  • » SharePoint Designer
  • » Visio
    • PivotDiagram, Value Stream Map, and ITIL (Information Technology Infrastructure Library) templates
    • Themes
    • New - dynamic - workflow shapes
    • Connect data into diagrams
    • Data graphics / pivot diagrams (drill down!)
  • » Word




Workflow and Event-driven process chain from Visio 2007




Sample Graphical Report produced in Visio: the data updates!



What goes into the SOA Enabling Infrastructure
8/24/2007 4:28 PM (Approved)
Each vendor has a different (and self serving) way of dividing this stuff up, which makes it hard to THINK, never mind compare.  Here is my version:

1. Interface/Presentation Tools [Web, Office]

2. Application Server (incl database) [Windows XP and 2003, Exchange 2003, SQL Server 2000 and 2005, Sharepoint 2007, Project 2007, SAP 4.6, Cognos]

3. Development tools [Cool:Gen, VS.Net]

4. Repository: SHORT TERM: Sharepoint library [Arch+CoE builds] Later, bigger, better, more professional.

  • Design time policy library
  • Business taxonomy library
  • Performance data
  • Dashboards?

5. Authentication Gateway SHORT TERM: Build on ADAM - PROJECT


6. Service Registry

  • Service creation/publishing
  • Approval for use and design time governance
  • Discovery
  • Subscription for changes
  • Register data schemas/UDDI Data store
  • Link to code in source code repository
  • Link to validation and verification data in test management repository

7. Message Broker (EAI and B2B: transport, transform; including Monitoring of Messaging) SHORT TERM: BizTalk Server - PROJECT

8. Workflow/Orchestration SHORT TERM: Investigate how we would use it, using BizTalk Server, Windows Workflow Foundation, and Sharepoint as prototyping environments [Arch+CoE investigates]

9. Business Rules Engine: SHORT TERM: Investigate how we would use, using BizTalk Server as a prototyping environment [Arch+CoE investigates]

10. Management

  • Deployment
  • Health monitoring
  • Fault management

11. Policy Manager

  • Automatically enforce policy standards at publish
  • Run time policy lookup
  • Run time policy enforcement

12. Other

  • Analytics/Prediction
  • Complex Event Processing
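The Policy Manager item (enforce standards at publish; look up and enforce policy at run time) can be sketched as follows. The policy rules and service attributes here are entirely hypothetical, just to show the shape of publish-time checking:

```python
# Hypothetical design-time policies: each maps a name to a predicate
# that a service description must satisfy before it may be published.
POLICIES = {
    "require-https": lambda svc: svc["endpoint"].startswith("https://"),
    "require-owner": lambda svc: bool(svc.get("owner")),
}

def check_at_publish(service: dict) -> list:
    """Run every policy against the service; return the names that fail.
    An empty list means the service may be published."""
    return [name for name, rule in POLICIES.items() if not rule(service)]

svc = {"endpoint": "http://internal/payments", "owner": "Finance"}
violations = check_at_publish(svc)
```

The same rule table can back the run-time lookup: a gateway consults it per request rather than once at publish.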
Below are a couple of pages from Microsoft best practice talking about SHADOWING
8/24/2007 4:16 PM (Approved)

We might consider adopting this sort of terminology… See what you think, and we can discuss.

--------------

To create an agile architecture, MSF utilizes shadowing. A shadow is architecture for the functionality to be completed in the iteration. The shadow leads the working code at the beginning of iteration as the architects get out in front of the development for the iteration. During this time, the architecture and the working code are not in sync. This shadow communicates any re-architecting or redesign that needs to occur to keep the code base from becoming a stove pipe, spaghetti code, or one of the many other architectural anti-patterns.

As the pieces of the leading shadow are implemented, the architecture begins to reflect the working code base. The original parts of the system that were architected but not implemented now become implemented. When the architecture represents the working code, we call the shadow a trailing shadow. As the sun sets on the iteration, the leading shadow should be gone and replaced strictly by trailing shadow. The trailing shadow is an accumulation of the architectures over all the iterations.

To keep architecture from becoming too detailed, we recommend that it be focused at the component and deployment levels. For example, a smart client system for generating budget information may consist of a Windows client and a number of Web Services. Each of these Web services, the underlying database server, and the client itself would be components in this model. Remaining at the component level keeps architects from becoming the police of low-level design, although it never hurts to get tips from a more experienced developer.

The Microsoft terminology for one of these deployable components such as a Web service or database server is an application. One of the chief tools for the MSF architect is the application diagram, the equivalent of the component diagram in the Unified Modeling Language. Since the application diagram focuses on more concrete entities such as a Windows application, ASP.NET Web service, or external database, more system-level detail can be provided.

Shadowing is applied at the component or applications level. A shadow application initially communicates a desired change in the component-level behavior of a system. Shadow applications become invaluable when multiple teams are trying to coordinate work across multiple components. Changes can be made without affecting the code base until the architecture is ready to be implemented. Next, the code is generated or written for the shadow and the leading shadow is removed and replaced with a trailing shadow.

The planning process for creating shadow applications is similar to the agile pattern used to partition and plan the development work for the system. New architecture tasks are created at the beginning of the iteration when any structural changes need to be made to the architecture to accommodate the new scenarios or quality-of-service requirements. Architecture tasks are like the development or coding tasks that are used to divide the scenarios into the lower-level pieces that can be assigned to a single developer. However, they pertain to the architectural functions that must be performed to keep the system from entropy.

As a result of these tasks, the architect will add the endpoints or interfaces to the shadow applications to reflect the needs of the new requirements. These endpoints can be validated to ensure that the components such as Web services will work together properly in the context of the deployment environment. The endpoints of these applications can be connected to show how the components interact. Each application may be distributed on a separate machine or clustered to work together on a single machine.

As the development team becomes ready to implement the scenarios, the endpoints are deleted from the shadow applications and added to the application that represents working code. Unit tests are created for each side of the component to ensure that the proper functionality is provided and unit-tested. Finally, working code is written for these new endpoints.

At the end of the iteration, all of the proxy or unimplemented endpoints should be gone. In other words, all of the architecture should be translated into working code. The architectural model is not divorced from the working system, but rather is a reflection of it. This makes the documentation for the component model match the working system. Unit tests should be in place to make sure that the interfaces continue to work as new functionality is added.

Shadow applications provide many advantages. They keep the high-level design of the components in the system consistent with the code base. They allow larger teams to define responsibilities in the context of an agile architecture. Shadow applications are used to track the building of functionality across component boundaries. In this way, they allow MSF for Agile Software Development to scale to larger, more complex projects.
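The leading-shadow to trailing-shadow lifecycle above can be illustrated with a stub endpoint later replaced by working code. The service, method, and cost-centre names below are invented for illustration; they are not from the MSF material:

```python
class BudgetServiceShadow:
    """Leading shadow: the endpoint exists in the architecture but is not
    yet implemented, so any call fails loudly until development catches up."""
    def get_budget(self, cost_centre: str) -> float:
        raise NotImplementedError("architected, not yet implemented")

class BudgetService(BudgetServiceShadow):
    """Working code replacing the shadow; once this exists, the endpoint
    has moved from the leading shadow into the trailing shadow."""
    def get_budget(self, cost_centre: str) -> float:
        budgets = {"CC100": 250_000.0}  # stand-in for the real data source
        return budgets.get(cost_centre, 0.0)

# The unit test written for the endpoint passes only against working code,
# which is how the architecture stays a reflection of the running system.
assert BudgetService().get_budget("CC100") == 250_000.0
```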

Server accounts and security
8/24/2007 4:02 PM (Approved)
I have a few nitty-gritty recommendations - perhaps you can reverse engineer some guidelines or principles out of them:
a) Delete ALL the default accounts and groups, and create our own. (Putting something at the start of the Microsoft default names is OK IMHO - just don't use the Microsoft names.) 
b) IT Security need to work out an appropriate rights hierarchy: for example, perhaps only IT Security should be able to grant admin rights to a user or service account.  I think this is the 'restricted access' admin account mentioned below.  Please note that in general I think people should be able to READ whatever they like with normal accounts: it's making changes that should need extra privileges.
b1) By default, DBAs and Admins of production boxes SHOULD NOT have the ability to actually read or change the data - data updates get made by (business) users working through properly considered user interfaces with appropriate constraints and business logic.
b1-1) DBAs and admins may well need to have access to data in dev and test, where they figure out solutions. Those solutions can then be run by service accounts or applications in prod.  That's why the data in dev and test should be representative without being confidential (ie. it should be mixed up so you can't connect the facts).  Every application should have a data plan for how they create and maintain suitable data in dev and test to make those environments realistic without being confidential.  Every application should review that plan (annually?) to ensure the data is still representative. [Testers will need some data that seems to make sense, so they can test...]
b1-2) Applications which are not adequate to obey this rule should be given a dispensation, and the risks managed.
b1-3) All of the above also applies to developers and managers, etc etc.
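One simple technique a data plan might use is independent column shuffling: each sensitive column keeps its real value distribution, but values can no longer be linked across a row. This is a rough sketch with invented data, not a complete de-identification scheme:

```python
import random

def deidentify(rows, keys_to_shuffle, seed=0):
    """Shuffle each sensitive column independently across rows, so dev/test
    data stays statistically representative but real facts can't be connected."""
    rng = random.Random(seed)
    out = [dict(r) for r in rows]  # don't mutate the production extract
    for key in keys_to_shuffle:
        values = [r[key] for r in out]
        rng.shuffle(values)
        for r, v in zip(out, values):
            r[key] = v
    return out

prod = [{"name": "A. Smith", "salary": 90_000},
        {"name": "B. Jones", "salary": 60_000},
        {"name": "C. Wu",    "salary": 75_000}]
test_data = deidentify(prod, ["name", "salary"])
```

Note that shuffling alone may not be enough for small or highly skewed datasets; the annual review of the data plan is where such gaps should be caught.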
c) Admin rights over production boxes are NOT to be assigned to people's normal login account. Users with admin rights should have to explicitly use those rights - for example, by using the RUNAS command.  For the BEST way to use admin rights, see
    c1) Example: An agency I worked with have a username account and a username_A account. The _A accounts are the ones with admin rights.
    c2) A user will need a security clearance adequate for all data they could potentially access: that means domain admin rights bring with them a need for a Highly Protected clearance.  Admin rights to a particular server or group of servers would depend on the classification of the data on them, so I suppose admin rights for <public site> might not require any particular clearance, but for <confidential site> might require highly protected. (Applying the Principle of Least Privilege to User Accounts on Windows XP)
d) Service accounts should have the least access possible  - cf.  "Whenever possible, run services as the Local Service account, so that the account can only gain access to a single computer and not to the entire domain. Services that require authenticated network access might need to use the Network Service account, and you should deploy services that require broader implementation as the Local System account." 
e) If scalability doesn't matter, then data access should NORMALLY be done in the context of the user who will be viewing the data, allowing security to be applied at the data level. (Developers call this 'pass-through authentication').  This means that users with different access rights could open a screen, and one of them see more data than the other without the developers having to do anything. The exception is when we decide the level of detail is high enough that the confidentiality needs are satisfied, and all users should see the same answers (common for higher level reports).
If scalability matters, then data access needs to use its own security context, with business logic doing the heavy lifting. This is primarily for scalability inside the database. Most modern SOA approaches use a centralised authentication gateway service to determine the access rights the individual needs, and apply that against the data the business logic component is allowed to expose.
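The pass-through idea can be illustrated with row-level filtering driven by the caller's clearance. The classifications, ranks, and rows below are invented for illustration:

```python
# Hypothetical records tagged with the classification needed to read them.
ROWS = [
    {"id": 1, "classification": "public",       "detail": "headline figure"},
    {"id": 2, "classification": "confidential", "detail": "line-item breakdown"},
]
CLEARANCE_RANK = {"public": 0, "confidential": 1}

def query_as(user_clearance: str):
    """Pass-through style: the query runs in the caller's security context,
    so two users opening the same screen see different row sets without
    any per-screen developer effort."""
    rank = CLEARANCE_RANK[user_clearance]
    return [r for r in ROWS if CLEARANCE_RANK[r["classification"]] <= rank]

public_view = query_as("public")          # only the public row
analyst_view = query_as("confidential")   # both rows
```

In the scalable variant described above, the same filter would instead be applied by the business logic component using rights obtained from the authentication gateway, so that database connections can be pooled under a single service account.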
Agile?
8/23/2007 4:55 PM (Approved)

Many of the "agile" ideas are really good ideas - but some of them make assumptions that just don't ring true for OUR environment. Here:

  • close relationship between professionals in different parts of the lifecycle: we CAN consult with each other
  • similar culture
  • 90% of most of the systems are Business as Usual
  • problems with stakeholder buyin

A common misconception is that agile methods are a lightweight process anyone can apply - this just isn't true.  Professional agile methods require a high degree of sophistication and discipline.  There is a term for the simplistic 'just do it' approach - it's called 'hacking'.

"Software development always gets to formality and precision if code is produced. So the question is not 'if formality and precision', but when and by whom it is introduced and who can review it." We advocate a middle road: we want just enough specification up front to allow for managable scope and deadlines.  We want enough design to allow the architecture team to provide some guidance, without going down to the nth degree of detail.

The base appeal of "agile" for many developers is the direct relationship with the stakeholders that it advocates, and the perceived relaxing of deadlines. Our projects typically have more stakeholders, and need to have requirements elicited and consolidated. 

Some customers have started to associate a responsive development team who truly wants to build them a good solution with 'agile' - that's just marketing.

We encourage evolutionary prototyping, an iterative cyclical process, and the use of timeboxes, all within a planned approach.

We view project management, requirements and testing as areas of professional expertise, and do not accept the suggestion that developers can or should do it all. This is particularly true for projects with groups of conflicting stakeholders.

BAs and Designers
8/23/2007 4:52 PM (Approved)

1) BAs are about getting the right outcomes - Designers are about going about things the right way
Requirements are supposed to be consumed by designers, who create the technical plans for product development: and developers work from those technical documents, not from the raw requirements.

Requirements are also consumed by the testers, validating the work of both the designers and the developers.

2) Designers are the 'technical possibility thinkers' of the process - they will come up with possible ways to achieve the objectives set for them by the requirements.  Some interaction may be needed if the BAs have inadvertently specified some 'how' - designers will often say back to BAs "if I did this, would it satisfy the real underpinning needs?"  Designers should be consulted frequently for three reasons:
a) "are requirements detailed enough" is a question for the designer
2b) their unique perspective on posssibility thinking can propose ways of thinking about problems that may move things forward
2c) they are a great resource for 'what has been done before', and hence might be able to be reused

3) The process provides for BAs to consult with designers

  • during the Expert Input in the Initiation Phase
  • a review at the end of the Initiation Analysis
  • when coming up with different options in the Analysis and Synthesis stage of Planning Analysis
  • reviewing the Planning Requirements Baseline, at the end of the Planning Analysis
  • just before signoff of the detailed requirements for each increment

The PM also consults the designers directly at several points, such as making the increment plan, developing the WBS, and during the system design both for the overall project, and for each increment.

4) What do designers do with this information?  In a typical environment, designers are responsible for producing the following sorts of artifacts:

  • Application Diagram [partition the system; choose patterns]
  • System Diagram [interfaces - web services, classes]
  • Deployment and/or Logical Datacenter Diagram [what goes where. May have different ones for Dev and Prod]
  • Threat and Vulnerability Model [aka security risk assessment]
  • Performance Model [quality of service, workload, performance objectives - risk assessment]
  • Risk-Reduction Prototypes [as necessary]
  • [In detail increments only] A list of development tasks with (a) Priorities (b) Risks and (c) Integration Sequence

[I may provide samples of some of these, for education purposes]

5) What - precisely - do BAs produce for designers?
In one sense, the requirements are produced by the BA working with business, for the use of (a) the designers and (b) the testers. So "everything" would be the answer. There are a few particular things of special interest to designers:

  • The Planning Requirements Baseline is deliberately structured to provide the information designers need - they are particularly interested in the quality attributes, but use the whole document. 
  • In some orgs, BAs develop a Conceptual Data Model, which is done in Visio. (In some other organisations, information about data is produced later in the lifecycle)
Design Deliverables
8/23/2007 4:51 PM (Approved)

What deliverables do I expect out of application design?

  • Application Diagram [partition the system; choose patterns]
  • System Diagram [interfaces - web services, classes]
  • Deployment and/or Logical Datacenter Diagram [what goes where. May have different ones for Dev and Prod]
  • Threat and Vulnerability Model [aka security risk assessment]
  • Performance Model [quality of service, workload, performance objectives - risk assessment]
  • Risk-Reduction Prototypes [as necessary]
  • List of development tasks [with (a) Priorities (b) Risks (c) Integration Sequence]

Usually, I would expect the first six for all applications, and the seventh to be done increment by increment (along with refining the existing artifacts)

Can we abbreviate?
For a project on speed, I might allow just the first two and seventh before developers start work, and do 3-6 concurrently, but it really isn't a good practice. Given that 3-6 can be done in under an hour, I don't see that offering a sped-up alternative makes any sense.

Book review: Flexible Software Design
8/23/2007 4:49 PM (Approved)

I recently read an interesting book, which puts an interesting proposition that I thought we might discuss... Below I have summarised their argument into two pages, instead of the 440 pages they took. ...The book was Flexible Software Design: Systems Development for Changing Requirements by Bruce Johnson, Walter W. Woolfolk, Robert Miller and Cindy Johnson, Auerbach Publications © 2005 (440 pages), ISBN 9780849326509

[The authors have some old ideas in places, which I have tried to skim over (they program in COBOL and don't seem to have heard of XML), but most of the below are actual quotes - quotes in black - me in blue]

=====Introduction and Problem Statement=====

Since 72% of large projects are late, over budget or don't deliver anticipated results, if you're a sponsor of a project you have a 28% chance of success [Mark Jeffrey in Brandel, 2004]. ...In attempting to combat these troubles, organizations rely increasingly on methods of accelerating systems development, when the real problem is systems maintenance. Maintenance applies not only to legacy systems, but also to new systems, whether written last year, last month, or even still under development. ...The IT world has a problem: it is drowning in its systems maintenance backlog. The enterprise that employs IT has a problem: it cannot fully exploit its automated systems when necessary software modifications persistently remain undone. This is far more serious than how to do systems development faster.

The authors review the enormous amount of IT capability and budgets that have to go to maintenance, and the way the maintenance burden slows down ITs ability to be responsive to business. They quote findings that between 40% and 60% of development cost is respent annually on maintenance. Organizations need to change, and change frequently. But in many organizations, change is inhibited by computer systems that are resistant to modification. Inflexible systems plus changing requirements equals costly maintenance. It is virtually every enterprise's experience with maintenance that it takes too long and costs too much.

They assert that it isn't possible to design and build a system to meet business needs, because the needs are always changing. They argue that the theory that developing faster is the answer is silly if the more-quickly-developed systems are just as rigid - and besides: it doesn't help with the inflexibilities caused by the existing rigid systems.

=====Paradigm Shift=====

The authors present a number of myth/reality dualities. The most interesting are:

1) The myth of the successful system
Myth: Successful computer systems usually generate few, if any, modification requests.
Reality: Successful computer systems generally generate continuous demands for modification. Often the unsuccessful system is mistaken for a successful one and vice versa. Establish a system "success metric" that includes the level of usage and modification requests by system customers.

2) The myth of the solution
Myth: Information systems are solutions to business problems.
Reality: Information systems simply offer fast, cheap, and accurate automated assistance with business functions. Substitute the concept of "automated assistance" whenever the term "solution" is used when considering systems investments.

3) The myth of the naïve customer
Myth: Customer perceptions of what it should take to implement systems modifications are grossly unrealistic; they do not appreciate how complex automated systems are. Thus, IT needs to educate customers in this matter.
Reality: Customer perceptions of what ought to be the case are realistic. What they do not perceive correctly is the inflexibility and fragility of current systems. IT must learn how to develop flexible and stable systems. Build systems consistent with the customers' accurate sense that modifying the automated system should be no harder than changing the real-world system.

=====Their solution=====

The authors assert that the ideal system allows the business users to do most maintenance themselves - and that IT should develop systems that permit that. In this environment, business users can be tasked with systemic knowledge-based innovation and continuous quality improvement. They suggest a definition for application quality to be 'the ease with which the software can be adjusted to accommodate business requirements both initially and over the life of the software'

[Staff] must understand the "life-cycle" advantages of flexibility and constantly influence the organization and the project team to dedicate the necessary resources and time to implement flexibility. This is an ongoing challenge. The authors have seen organizations give up on flexibility features when schedule or budget constraints took over. The value of flexibility hinges on the life-cycle cost, which is driven by the low-maintenance project outcome, referred to by PMI as the "cost of using the project's product":

Realistically, there are varying degrees of flexibility, depending upon what aspect of the computer systems must be modified. We define two distinct levels of flexibility - strong flexibility and medium flexibility - and suggest that possibly an order-of-magnitude reduction in resynchronization and maintenance costs is associated with each level:
- Strong flexibility: Only data value modifications are required.
- Medium flexibility: Only data value and local procedural code modifications are required.
Below strong and medium flexibility are systems that require changes to information structures. These are effectively inflexible, or rigid, systems. Project managers must help the organization to manage the trade-offs and evaluate life-cycle costs in an effort to determine the appropriate level of flexibility for a particular project.
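A minimal sketch of what "strong flexibility" means in practice: behaviour is held in data values, so accommodating a new case is a row insert, not a code edit and redeployment. The leave-rules example is invented purely for illustration:

```python
# Behaviour table: (leave_type, max_days, needs_certificate).
# Strong flexibility means changes happen here, in data, not in code.
LEAVE_RULES = [
    ("annual", 20, False),
    ("sick",   10, True),
]

def max_days(leave_type: str) -> int:
    """Procedural code stays generic: it interprets the table,
    so it never needs modification when rules change."""
    for name, days, _needs_cert in LEAVE_RULES:
        if name == leave_type:
            return days
    raise KeyError(leave_type)

# Adding "carer" leave is a data-value modification only -
# no procedural code is touched, edited, or redeployed.
LEAVE_RULES.append(("carer", 5, True))
```

Medium flexibility, by contrast, would allow a small local code change alongside the data change; a rigid system would require restructuring the table's shape itself.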

=====What would be involved?=====

1) What the authors call a "stable information structure" but we might refer to as stable taxonomies with well known extension paths

2) What the authors call "processes that exploit that stability". We might call that rule-based programming leveraging rule-guided processes (as opposed to exception-based programming)

3) As discussed above, a commitment to flexibility, and a focus on full-lifecycle costs

Things the authors don't emphasise:

4) A willingness to put the ability to change into the hands of the business
a) do they have the bandwidth to take on this sort of work? Is that sort of scale-out easier than scaling-out IT staff?
b) how do conflicts get resolved?

5) Robust change control and regression testing solutions to ensure that changes don't have undesirable (and unforeseen) consequences. Since Patrick tells me we are about to be a centre of testing excellence, perhaps this is straightforward.

6) ???


The authors apply the same logic to IT Strategic Plans - after reviewing the failure of organisations to actually follow their plans, and the consistency of the excuse that changing priorities caused quick workarounds, they say: "Both IT management and senior management wanted viable long-range plans and had authorized their preparation at considerable expense. It was not changing priorities that undid the plans. The plans themselves were inherently deficient because they failed to provide a stable foundation upon which to construct systems and exploit computing technology over a long term."

AGA and BA
8/23/2007 4:43 PM Approved

The Australian Government Architecture (AGA) was published on 18 June. Agencies are not required to replace existing frameworks with that one, but AGIMO "strongly recommend that you adopt the AGA". The AGA Reference Models provide a standard structure taxonomy against which agencies and Whole-of-Government may map ICT investments, business designs and IT services/capabilities. We currently anticipate that new policy proposals over $10 million will be required to be framed in the terms of the AGA in the next budget round, and that over time that threshold may lower.

Currently they have published the Service, Data and Technology reference models. The Technology and Data RMs are for ICT to deal with: the Services reference model directly relates to the work we want to do with Deloitte - more information on this one is below.

A Business reference model is due out in Sept 07, and will be largely focused on all-of-government lines of business, which means we will probably be at the line item level, and not be impacted much by it.

A Performance reference model is also due out late in 07, and will have significant ramifications for reporting: currently there is a 6 page outline. We should probably gear up to leverage this reference model when it comes out.

A mind map showing the headings of the Services reference model is enclosed, and the detailed document is at: Pages 25-50 relate to the Services reference model.

My view:

1. Ignoring the AGA would be inappropriate. At a minimum, we should consider its value as an input to our process, and provide feedback to AGIMO on our findings.

2. There may be some value in being seen as an early adopter / contributor

Example engagement plan
8/23/2007 4:38 PM Approved

Purpose of engagement: Produce a report which provides a plan and scope for establishing a modern service-orientated business architecture which can be used for business management, business improvement, and as a foundation for ICT planning 2-5 years out. The report must provide expert advice on processes, methodologies, and tools that are appropriate in three principal areas:

  • STRUCTURE and CONTENT: what should go into our business architecture,
  • PROCESS: how it should be constructed, including options for a gradual incremental approach and for a more rapid approach, and a cost-benefit analysis between those approaches
  • USE and CHANGE: how the various pieces of the business architecture can be used for maximum business benefit, and maintained/improved over time as our environment changes.

The report should also outline how we could leverage the consultant in this process over time, for example through an ongoing assurance role during the development of the business architecture.

The report should address issues like the following:

1. The "Service Orientated Business Architecture" diagram (enclosed). This positions the elements of the Business Architecture, and makes clear that this is supposed to feed into Service Orientated Architecture. We see a considered replacement of this early draft as a key deliverable.

2. We have identified a number of particular things we could leverage in doing Business Architecture work, and would like advice on which are suitable for us, and where there might be better approaches - and the relative merit of the different areas. Similarly we would like ideas on how to address any other areas identified as important parts of a Business Architecture.

  • Should we consider using the Microsoft Services Business Architecture offering to assist in capability mapping? (This offering had the development code name "Microsoft Motion") Are there other offerings that we should consider?
  • Should we use the Business Motivation Model (an international standard from OMG) for consolidating and correlating business plans with strategic goals? Is there a different approach that would be better for us?
  • Should we consider working on Value Streams/Chains, or more basic Business Process mapping and management, or neither?
  • Is Information Flow mapping something we should consider at this (or any) stage?
  • Do modern innovations with Business Process Management suggest enhanced reporting capabilities we should consider?
  • We have access to a framework for Business Process Improvement from another agency - should we consider asking the consultant to review it?
  • The public literature on Business Architecture is not very customer or stakeholder focused, and we view this as a critical weakness. Can we merge information from something like the Customer Expectation Management Model into our Business Architecture to ensure that this dimension is covered? Is there a better way to achieve this goal?
  • Guidance on particular deliverables and processes to assist ICT so they can leverage the Business Architecture.

3. We intend for Business Architecture to have a significant impact throughout the organisation. Does the consultant have any advice on how we should go about relating it to existing structures, processes and initiatives? Cultural change issues? Advice on governance and ensuring engagement?

4. Given developments in other agencies, in the Public Service Commission, and at the Department of Finance in particular, and given our status as a DHS agency, are there other inputs we should leverage? If so, how should we leverage them into the Business Architecture? For example:

  • Australian Government's 2006 e-Government Strategy, Responsive Government: A New Service Agenda
  • The Services Reference Model from the 2007 Australian Government Architecture
  • Guidance from the Integrated Transactions Reference Group
  • Guidance from the APSC on capability development and capability models
  • Finance's 2007 guidance on the Business Case Toolset

5. Checkpoints: We are keen to use the architecture as an input to things like:

  • Making transactions less complex and hence cheaper by improving services to reduce the volume of complaints and objections.
  • Understanding the value of personal productivity tools and collaboration capabilities within the agency, between us and our customers and stakeholders, and for our customers among themselves.
  • Understanding interdependencies as a prelude to possible internal charge back arrangements.
  • ICT Strategic Planning decisions about people, process and technology.

This engagement is NOT to include:

  • a cost/benefit analysis of Business Architecture: it is to plan doing the work.
  • actually producing the Business Architecture: it is about scoping and planning.
  • extensive consultation, although we realise that some consultation will be necessary to provide advice targeted to CSA's particular circumstances. It is about the consultant making an intellectual contribution - telling us things we don't know.
  • a comprehensive canvassing of options: it is to give precise advice as the foundation for a follow-on activity: producing pieces of the Business Architecture.
Starting point for Dev guidelines on security
8/23/2007 4:36 PM Approved
Excerpt from IT Auditing: Using Controls to Protect Information Assets by Chris Davis, Mike Schiller and Kevin Wheeler, McGraw-Hill © 2007 (387 pages), ISBN 9780072263435

Auditing Applications

STRIDE is a methodology for identifying known threats. The STRIDE acronym is formed from the first letter of each of the following categories and is an example of a simplified threat-risk model that is easy to remember and apply.

Spoofing Identity Identity spoofing is a key risk for applications that have many users but provide a single execution context at the application and database levels. In particular, users should not be able to become any other user or assume the attributes of another user.

Tampering with Data Users can potentially change data delivered to them, return it, and thereby potentially manipulate client-side validation, GET and POST results, cookies, HTTP headers, and so forth. The application should not send data to the user, such as interest rates or periods, that are obtainable only from within the application itself. The application also should carefully check data received from the user and validate that it is sane and applicable before storing or using it. For web and other applications with a client component, ensure you perform your validation checks on the server and not the client, where the validation checks might be tampered with.
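A minimal sketch of the server-side validation idea, in Python (the field names, limits, and rate table are my assumptions, not from the book):

```python
# Authoritative values live on the server; nothing like an interest rate is
# ever accepted back from the client.
SERVER_RATES = {"savings": 0.025, "checking": 0.001}

def validate_transfer(form: dict) -> dict:
    """Re-validate client-submitted values on the server.

    Client-side checks, cookies, GET/POST values, and hidden fields can all
    be tampered with, so nothing from the request is trusted as-is.
    """
    amount = float(form["amount"])       # raises ValueError on junk input
    if not 0 < amount <= 10_000:         # sanity/range check on the server
        raise ValueError("amount out of range")
    # The rate comes from the server's own table, never from the request.
    rate = SERVER_RATES[form["account_type"]]
    return {"amount": amount, "rate": rate}
```

The same checks may also run on the client for usability, but only the server-side copy counts as a control.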

Repudiation Users may dispute transactions if there is insufficient auditing or recordkeeping of their activity. For example, if a user says, "But I didn't transfer any money to this external account!" and you cannot track his or her activities through the application, then it is extremely likely that the transaction will have to be written off as a loss.

Therefore, consider if the application requires non-repudiation controls, such as web access logs, audit trails at each tier, or the same user context from top to bottom. Preferably, the application should run with the user's privileges, not more, but this may not be possible with many commercial off-the-shelf applications.

Information Disclosure Users are rightfully wary of submitting private details to a system. If it is possible for an attacker to publicly reveal user data at large, whether anonymously or as an authorized user, there will be an immediate loss of confidence and a substantial period of reputation loss. Therefore, applications must include strong controls to prevent user ID tampering and abuse, particularly if they use a single context to run the entire application.

Also consider if the user's web browser may leak information. Some web browsers may ignore the no-caching directives in HTTP headers or handle them incorrectly. In a corresponding fashion, every secure application has a responsibility to minimize the amount of information stored by the web browser just in case it leaks or leaves information behind which can be used by an attacker to learn details about the application or the user, possibly using that information to assume the role of an authorized privileged user.

Finally, in implementing persistent values, keep in mind that the use of hidden fields is insecure by nature. Such storage should never be relied on to secure especially sensitive information or to provide adequate personal privacy safeguards.
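To discourage browser caching of sensitive pages, applications typically send explicit response headers. The values below follow common HTTP/1.1 practice; how they are attached to a response depends on your framework.

```python
# Response headers commonly used to tell browsers and proxies not to cache
# sensitive pages. Some older browsers mishandle these, which is why the
# application should also minimize what it sends in the first place.
NO_CACHE_HEADERS = {
    "Cache-Control": "no-store, no-cache, must-revalidate",
    "Pragma": "no-cache",   # HTTP/1.0 fallback
    "Expires": "0",
}
```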

Denial of Service Application designers should be aware that their applications may be subject to a denial-of-service attack. Therefore, the use of expensive resources such as large files, complex calculations, heavy-duty searches, or long queries should be reserved for authenticated and authorized users and not be available to anonymous users.

For applications that don't have this luxury, every facet of the application should be engineered to perform as little work as possible, to use fast and few database queries, and to avoid exposing large files or unique links per user in order to prevent simple denial-of-service attacks.

Elevation of Privilege If an application provides distinct user and administrative roles, then it is vital to ensure that the user cannot elevate his or her role to a higher privileged one. In particular, simply not displaying privileged-role links is insufficient. Instead, all actions should be gated through an authorization matrix to ensure that only the permitted roles can access privileged functionality.
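An authorization matrix can be as simple as a role-to-actions mapping checked on every request; a sketch in Python (the roles and action names are invented for illustration):

```python
# Every action is gated through the matrix, rather than merely hiding
# privileged links from lower-privileged roles in the UI.
AUTHZ_MATRIX = {
    "user":  {"view_report"},
    "admin": {"view_report", "edit_users", "approve_entries"},
}

def authorize(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in AUTHZ_MATRIX.get(role, set())
```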

Best Practices
These best practice concepts help you quickly spot common weak areas and poor controls.
Apply Defense in Depth
Layered approaches provide more security over the long term than one complicated mass of security architecture. You might, for example, use access-control lists (ACLs) on the networking and firewall equipment to only allow necessary traffic to reach the application. This approach significantly lowers the overall risk of compromise to the system on which the application is running because you quickly eliminate access to services, ports, and protocols that otherwise would be accessible to compromise.

Use a Positive Security Model
Positive (whitelist) security models allow only what is on the list and exclude everything else by default. Negative (blacklist) security models allow everything by default and eliminate only the items you know are bad. This is the challenge with antivirus programs, which must be updated constantly to keep up with the stream of new attacks (viruses). The problem with a negative model, if you are forced to use one, is that you absolutely must keep it updated; even then, a vulnerability you don't know about could exist, and your attack surface is much larger than with a positive security model.
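A positive model can be as small as a known-good pattern per input field; a minimal sketch in Python (the cost-center format of two letters plus four digits is an assumed example):

```python
import re

# Positive model: accept only values matching the known-good shape.
# Everything else - including inputs we have never seen - is rejected.
COST_CENTER = re.compile(r"[A-Z]{2}\d{4}")

def accept_cost_center(value: str) -> bool:
    # fullmatch ensures the WHOLE string matches, not just a prefix.
    return COST_CENTER.fullmatch(value) is not None
```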

Fail Safely
There are basically three responses when an application fails. You can allow, block, or error. In general, application errors should fail in the same manner as a disallow operation as viewed from the end user. This is important because then the end user doesn't have additional information to use that may help him or her to compromise the system. Log what you want, and keep any messages that you want elsewhere, but don't give the user additional information he or she might use to compromise your system.
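A sketch of the fail-closed pattern in Python (the policy-lookup function is a hypothetical stand-in):

```python
import logging

log = logging.getLogger("authz")

def policy_allows(user: str, resource: str) -> bool:
    # Stand-in for a real policy lookup that can fail unexpectedly.
    raise RuntimeError("policy store unavailable")

def check_access(user: str, resource: str) -> bool:
    """Fail safely: an internal error looks exactly like a deny to the user.

    Details go to the server-side log, never into the response, so a failure
    gives the end user no extra information to work with.
    """
    try:
        return policy_allows(user, resource)
    except Exception:
        log.exception("access check failed for %r on %r", user, resource)
        return False
```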

Run with Least Privilege
The principle of least privilege mandates that accounts have the least amount of privilege possible to perform their activity. This encompasses user rights and resource permissions such as CPU limits, memory capacity, network bandwidth, and file system permissions.

Avoid Security by Obscurity
Obfuscating data, or hiding it instead of encrypting it, is a very weak security mechanism, especially for an application. If one person figured out how to hide the data, there is a good chance another person will figure out how to recover it. A real-world example is hiding a house key under the doormat: a criminal wants the easiest possible way into the house and will check common places such as the doormat, the closest rock to the door, and above the door frame. Critical data should be encrypted, or never stored in the first place - not merely obfuscated.

Keep Security Simple
Simple security mechanisms are easy to verify and easy to implement correctly. Bruce Schneier is famous for suggesting that the quickest way to break a cryptographic algorithm is to go around it. Avoid overly complex security mechanisms where possible. Developers should avoid double negatives and complex architectures when a simpler approach would do. Don't confuse complexity with layers: layers are good; complexity isn't.

Detect Intrusions and Keep Logs
Applications should have built-in logging that's protected and easily read. Logs help you to troubleshoot issues and, just as important, help you to track down when or how an application might have been compromised.

Never Trust Infrastructure and Services
Many organizations use the processing capabilities of third-party partners, who more than likely have differing security policies and postures than you. It is unlikely that you can influence or control any external third party, whether they are home users or major suppliers or partners. Therefore, implicit trust of externally run systems is dangerous.

Establish Secure Defaults
Your applications should arrive to you or be presented to the users with the most secure default settings possible and still allow business to function. This may require training end users or communications messages, but the end result is a significantly reduced attack surface, especially when an application is pushed out across a large population.

Use Open Standards
Where possible, base security on open standards for increased portability and interoperability. Since your infrastructure is likely a heterogeneous mix of platforms, the use of open standards helps to ensure compatibility between systems as you continue to grow. Additionally, open standards are often well known and scrutinized by peers in the security industry to ensure that they remain secure.


Performing the Application Audit
These steps generally refer to controls specific to the application and do not address general controls at the level of the network, operating system, and database management system. Please refer to other sections of this book for general controls at these levels and consider the frameworks and concepts enumerated earlier in this chapter as you approach developing the audit program for your application.

Part 1: Input Controls
Incorrectly implemented or inadequately checked input controls remain among the top reasons applications suffer vulnerabilities. Examples include buffer overflows and injection attacks.

1 Review and evaluate data input controls.
As much as possible, online transactions should perform upfront validation and editing in order to ensure the integrity of data before they are entered into the system's files and databases.

Verify that invalid data are rejected or edited on entry. The auditor will need to understand the business function being supported by the system and the purpose and use of its various data elements. This likely will require discussion not only with the developers but also with the end users. Once the purpose of the system and its data are understood, it will be much easier for the auditor to think through the various data-integrity risks associated with the application. In some cases, a code review may be appropriate if the developers are available and the auditor is a knowledgeable coder. Poorly written, commented, or formatted code is often a red flag that suggests that a deeper review is needed. Some basic examples of good data input controls include

    • Fields that are intended to contain only numbers should not allow entry of alphanumeric characters.
    • Fields that are used to report such things as dates and hours should be set up to either require input in the correct format (such as MMDDYY or HHMM) or to transform input into the correct format.
    • Where applicable, transactions should perform "reasonableness" checks on inputs. An example would be preventing users from reporting labor of more than 24 hours in a day or more than 60 minutes in an hour. Another example would be disallowing entry for time, costs, etc. for an employee who has been terminated or who is on LOA.
    • When there are a finite number of valid entries for a field, entries that are invalid should not be allowed. In other words, input screens should validate such things as cost centers, account numbers, product codes, employee numbers, etc. against the appropriate database(s) or file(s).
    • Duplicate entries should not be allowed for data that are intended to be unique. For example, the transaction should not allow a product code to be added to the product database if that code already exists on the database.
    • Where applicable, the transaction should perform "logic" checks. For example, if there were a transaction used by ticket agents to record how many seats were sold on a flight and how many no-shows there were, the transaction should not allow the agent to input numbers indicating that there were more no-shows than seats sold.
    • Each input screen generally has certain fields that are required for the transaction to be processed accurately. Execution of a transaction should not be allowed until valid data are entered into each of those fields.
    • Where applicable, transactions should perform "calculation" checks on inputs. For example, the system should ensure that journal-entry credits and debits balance to zero before processing a transaction. Another example would be a labor-entry system where hours charged for the week need to add up to at least 40.
    • Programmed cutoff controls should be in place to help prevent users from recording transactions in the wrong period. In other words, the screen should not allow users to record transactions in prior accounting periods.
    • Database operators and keywords such as *, =, or "select" should be disallowed as valid input.
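A few of the checks above can be sketched as a simple server-side validator in Python (field names, the labor-entry scenario, and the exact limits are illustrative assumptions):

```python
import re
from datetime import datetime

def validate_labor_entry(entry: dict) -> list:
    """Return a list of validation errors for a labor entry (empty = accept)."""
    errors = []
    # Numeric-only field check
    if not str(entry.get("employee_number", "")).isdigit():
        errors.append("employee_number must be numeric")
    # Date format check (MMDDYY); strptime also rejects impossible dates
    try:
        datetime.strptime(str(entry.get("work_date", "")), "%m%d%y")
    except ValueError:
        errors.append("work_date must be in MMDDYY format")
    # Reasonableness check: no more than 24 hours in a day
    hours = entry.get("hours", 0)
    if not 0 < hours <= 24:
        errors.append("hours must be between 0 and 24")
    # Reject database operators/keywords in free-text input
    if re.search(r"[*=;]|\bselect\b", str(entry.get("comment", "")), re.I):
        errors.append("comment contains disallowed database operators")
    return errors
```

Checks against reference data (valid cost centers, terminated employees, duplicates) would follow the same pattern but consult the appropriate database or file.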

2 Determine the need for error/exception reports related to data integrity and evaluate whether this need has been filled.

Error or exception reports allow any potential data-integrity problems to be reviewed and corrected when it's not feasible or practical to use input controls to perform upfront validation of data entered into the system. For example, while it may not be inherently wrong for an employee to enter 80 hours of overtime for one week into a labor system, this sort of unusual event should be captured on a report for review by the appropriate level of management.

Discuss the application's error and exception handling with the developer or administrator. Based on the results of the analysis from step 1, look for opportunities for additional data integrity checks (that may not have been feasible to perform with "hard" upfront input requirements). Again, discussions with the end users can be very helpful here. Ask them what sorts of reporting would be helpful for them in catching anomalies and errors. In some cases, a code review may be appropriate if the developers are available and the auditor is a knowledgeable coder. Poorly written, commented, or formatted code is often a red flag that suggests that a deeper review is needed.

Part 2: Interface Controls
3 Review and evaluate the controls in place over data feeds to and from interfacing systems.
When an application passes and/or receives data to or from other applications, controls need to be put in place that ensure that the data are transmitted completely and accurately.

Discuss data-feed controls with the application developer or administrator. Expect to see basic controls such as

    • Control totals from interface transmissions should be generated and used by the system to determine whether the transmission completed accurately. If a problem is found, reports should be issued that notify the proper people of the problem. Some examples of control totals that may be applicable are hash totals, record counts, and total amounts (for numerical records). Another type of control total could flag missing record numbers when records are transmitted in a sequential fashion.
    • The system should handle items that did not transmit successfully in such a manner that reports and/or processes enable these items to be resolved quickly and appropriately with audit trails as appropriate.
    • Data files that contain interface source or target information should be secured from unauthorized modifications. This may mean appropriate authentication controls, authorization controls, or encryption where necessary.
    • When it is not feasible to use control totals to verify accurate transmission of data, reconciliation reports should be generated that allow users to compare what was on one system with what was received on another system.
    • Where applicable, data validation and editing, as described in the "Input Controls" section of this checklist, should be performed on data received from outside systems. Error/exception reports should be generated that allow any data-integrity problems to be corrected.
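Control totals are straightforward to compute; a sketch in Python of count, amount, and hash totals for a feed (the record layout is an assumed example - sender and receiver would each compute this and compare):

```python
import hashlib

def control_totals(records: list) -> dict:
    """Compute record count, amount total, and a hash total for a feed."""
    digest = hashlib.sha256()
    amount = 0.0
    for rec in records:
        # Any canonical serialization works, as long as both sides agree.
        digest.update(("%s|%.2f" % (rec["id"], rec["amount"])).encode())
        amount += rec["amount"]
    return {"count": len(records),
            "amount": round(amount, 2),
            "hash": digest.hexdigest()}
```

If the receiver's totals differ from the totals transmitted alongside the feed, the transmission is flagged for investigation.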

4 In cases where the same data are kept in multiple databases and/or systems, ensure that periodic sync processes are executed to detect any inconsistencies in the data.

Determine with the help of the application developer or application administrator where this sort of control is applicable, and review for its existence and effectiveness.

Part 3: Audit Trails
5 Review and evaluate the audit trails present in the system and the controls over those audit trails.
Audit trails are useful for troubleshooting and helping to track down possible breaches of your application.
Review the application with the developer or administrator to ensure information is captured when key data elements are changed. This information should in most cases include the original and new values of the data, who made the change, and when the change was made. This information should be kept in a secured log in order to prevent unauthorized updates. The logs should be retained for a reasonable period of time, such as three or six months, in order to aid investigations into errors or inappropriate activities.
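The shape of such an audit record is simple; a sketch in Python (in production the stream would be an append-only, access-controlled log, and the field names here are my choice):

```python
import json
import time

def audit_change(log_stream, user: str, field: str, old, new) -> None:
    """Append one audit record: who changed what, when, and old/new values."""
    record = {"ts": time.time(), "user": user, "field": field,
              "old": old, "new": new}
    log_stream.write(json.dumps(record) + "\n")
```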

6 Ensure the system provides a means to trace a transaction or piece of data from the beginning to the end of the process enabled by the system.

This is important to verify that the transaction was fully processed and to pinpoint any errors or irregularities in the processing of that data.

Review the application with the developer or administrator and evaluate the existence of this ability.
Part 4: Access Controls
7 Ensure the application provides a mechanism which authenticates users, based, at a minimum, on a unique identifier for each user and a confidential password.

Failure to authenticate users or just having a poor authentication scheme presents an open opportunity for curious users and malicious attackers.

Review the application with the developer or administrator and verify appropriate authentication measures exist commensurate with the type of data on the application. For example, two-factor authentication might be required in some cases to authenticate users in sensitive environments or for end users accessing your network from their homes.

8 Review and evaluate the application's authorization mechanism to ensure users are not allowed to access any sensitive transactions or data without first being authorized by the system's security mechanism.

The system's security mechanism should allow for each system user to be given a specific level of access to the application's data and transactions.

Employees should only be given the amount of access to the system which is necessary for performing their jobs. Review the application with the developer or administrator, and verify this functionality in the application. In other words, it should be possible to specify which specific transactions and datasets or files a system user will access. In general, it also should be possible to specify what level of access (e.g., display, update, and delete) the user will receive to application resources.

9 Ensure that the system's security/authorization mechanism has an administrator function with appropriate controls and functionality.

The administrator user function should exist to help administer users, data, and processes. This account or functionality should be tightly controlled in the application to prevent compromise and disruption of services to other users.

Evaluate the use of the administrator function with the developer or application administrator. The user of this function should have the ability to add, delete, or modify user access to the application system and its resources. The security mechanism should also provide the ability to tightly control who has access to this administrator function. Also ensure that the system's security mechanism provides the system's security administrator with the ability to view who has access to the system and what level of access they have.

10 Determine whether the security mechanism enables any applicable approval processes.
The application's security mechanism should support granular controls over who can perform what approval processes, and then lock data that has been formally approved from modification by a lower authority. Otherwise, a lower authority or malicious user could modify or corrupt data in the system.

Verify with the developer or application administrator that appropriate controls are in place. For example, if there is a need for sign-off of journal entries before they can be passed on to the general ledger, the system's security mechanism should provide a means for defining who is authorized to perform this sign-off. Any data that have been through this sort of approval process should be locked from any further modifications.

Interviews with system users are a good mechanism for helping the auditor to determine the need for this sort of ability. It is critical for the auditor to understand not only the technical aspects of the application being reviewed but also the business purpose.

11 Ensure that a mechanism or process has been put in place that suspends user access on termination from the company or on a change of jobs within the company.

Poor deprovisioning processes may leave a user with inappropriate access to your application long after the access or authority should have been removed.

Verify appropriate deprovisioning processes are in place with the developer and application administrator. Be sure to look at both the application and the procedures around the application to ensure they are being followed and are capable of being followed as written.

For applications that have been in "production" for some time, select a sample of system users, and validate that their access is still appropriate. Alternatively, if possible, select a sample of system users who have changed jobs or left the company, and ensure that their access has been removed.

12 Verify that the application has appropriate password controls.
The appropriateness of the password controls depends on the sensitivity of the data used within the application. Overly weak passwords make the application sensitive to compromise, and overly strong passwords often force users to write them down in plain sight or to never want to change their password.

Verify appropriate password controls with the help of the developer or the application administrator. For example, three-digit PIN numbers probably are inappropriate for applications that store credit-card data, and a 20-character password probably is overly paranoid for someone trying to access his or her voicemail. Ensure that the security mechanism requires users to change their passwords periodically (e.g., every 30 to 90 days). When appropriate, the security mechanism also should enforce password composition standards such as the length and required characters. Additionally, the security mechanism should suspend user accounts after a certain number of consecutive unsuccessful log-in attempts. This is typically as low as 3 and can be as high as 25 depending on the application, other forms of authentication required, and the sensitivity of the data.
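The composition and lockout rules can be sketched as two small functions in Python (the length threshold, character classes, and lockout count are example values to tune per application, not the book's):

```python
import re

MAX_FAILED_LOGINS = 5   # suspend after this many consecutive failures
_failed_attempts = {}   # in production this state lives in the user store

def password_acceptable(pw: str, min_len: int = 10) -> bool:
    """Composition check: minimum length plus mixed character classes."""
    return (len(pw) >= min_len
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"\d", pw) is not None)

def record_failed_login(user: str) -> bool:
    """Count a failed attempt; True means the account should be suspended."""
    _failed_attempts[user] = _failed_attempts.get(user, 0) + 1
    return _failed_attempts[user] >= MAX_FAILED_LOGINS
```

A successful login would reset the counter for that user; periodic expiry would be enforced by storing and checking the date of the last password change.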

13 Review and evaluate processes for granting access to users. Ensure that access is granted only when there is a legitimate business need.

Users should have intentional access granted and governed by the application to prevent unauthorized access to areas outside the intended scope for the user. The application should have controls in place to prevent users from having more access than is required for their role. This step embodies the concept of least-privileged access.

Review the application with the developer or administrator. Possibly select a sample of users, and ensure that user access was approved appropriately. Verify that the authorization mechanism is working appropriately.

14 Ensure that users are automatically logged off from the application after a certain period of inactivity.
A person could have access to the application if they walk up to a logged-in workstation where the previous user didn't log off and the application is still active.

Review the application with the developer or administrator to evaluate the existence of this ability.
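
One common way this control is implemented is a simple idle-timeout check. The sketch below, and its 15-minute value, are illustrative assumptions rather than requirements from the step above:

```python
# Illustrative idle-session check; the 15-minute timeout is an assumed
# example value, chosen in practice per the application's sensitivity.
import time
from typing import Optional

IDLE_TIMEOUT_SECONDS = 15 * 60

def session_expired(last_activity: float, now: Optional[float] = None) -> bool:
    """True when the session has been idle longer than the timeout."""
    if now is None:
        now = time.time()
    return (now - last_activity) > IDLE_TIMEOUT_SECONDS
```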
15 Evaluate the use of encryption techniques to protect application data.
The need for encryption is determined most often by either policy, regulation, the sensitivity of the network, or the sensitivity of the data in the application. Where possible, encryption techniques should be used for passwords and other confidential data that are sent across the network. This prevents other people on the network from "sniffing" and capturing this information.

Review the application with the developer or administrator to evaluate the existence of encryption where appropriate.
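
For stored credentials specifically, one widely used technique (shown here as a sketch using Python's standard library, not as the method assumed by the audit step) is salted, iterated hashing, so passwords are never kept or compared in cleartext. Transport encryption such as TLS is a separate, complementary control:

```python
# Sketch: salted PBKDF2 password hashing via the standard library.
# The iteration count is an example value.
import hashlib
import hmac
import os

ITERATIONS = 100_000

def hash_password(password: str, salt=None):
    """Return (salt, digest); generates a fresh random salt if none given."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```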
16 Evaluate application developer access to alter production data.
In general, system developers should not be given access to alter production data.
Discuss with the developer or administrator, and evaluate the separation of duties between developers and administrators.

Part 5 Software Change Controls
Software change management (SCM), used by a trained software development team, generally improves the quality of code written, reduces problems, and makes maintenance easier.

17 Ensure that the application software cannot be changed without going through a standard checkout/ staging/testing/approval process after it is placed into production.

It should not be possible for developers to update production code directly. Should a failure in the application occur without enforced software change controls, then it might be difficult to impossible to track down the cause of the problem. Additionally, developers should not have access to data in production applications. This is particularly true if the application data are sensitive.

Evaluate this capability with the developers and application administrator.
18 Evaluate controls around code checkout, modification, and versioning.
Strong software controls around code checkout, modification, and versioning provide accountability, protect the integrity of the code, and have been shown to improve maintenance and reliability.

Verify with the developers that the software-change mechanism requires developers to check out code that they wish to modify. If another developer wishes to modify the same code while it is still checked out, he or she should be prevented from doing so. Alternatively, the second developer could be warned of the conflict but allowed to perform the checkout. In such a case, a notification of the duplicate checkout should be sent automatically to the original developer.

Additionally, ensure that the software-change mechanism requires sign-off before code will be moved into production. The system should require that this sign-off be performed by someone other than the person who developed or modified the code. In addition, the software-change mechanism should allow for specific people to be authorized to perform sign-off on the system's programs. The people with this authorization should be kept to a minimum.

Evaluate controls in place to prevent code from being modified after it has been signed off on but before it has been moved to production. Ensure that the software-change mechanism ‘versions’ software so that past versions of the code can be retrieved, if necessary.
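
The checkout, sign-off, and versioning controls described above can be modelled in a few lines. This is a toy illustration of the control logic, not a real SCM tool:

```python
# Toy model of exclusive checkout, separation-of-duties sign-off,
# and retrievable version history.
class CodeModule:
    def __init__(self, name: str):
        self.name = name
        self.checked_out_by = None
        self.versions = []        # (version number, author) history
        self.last_author = None
        self.signed_off = False

    def checkout(self, developer: str) -> bool:
        if self.checked_out_by is not None:
            return False          # locked by another developer
        self.checked_out_by = developer
        return True

    def checkin(self, developer: str) -> None:
        assert self.checked_out_by == developer, "must hold the checkout"
        self.versions.append((len(self.versions) + 1, developer))
        self.last_author = developer
        self.checked_out_by = None
        self.signed_off = False   # new changes need a fresh approval

    def sign_off(self, approver: str) -> bool:
        if approver == self.last_author:
            return False          # author cannot approve own change
        self.signed_off = True
        return True
```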

19 Evaluate controls around the testing of application code before it is placed into a production environment.
Improperly tested code may have serious performance or vulnerability issues when placed into production with live data.
Determine whether the software-change process requires evidence of testing, code walkthroughs, and adherence to software-development guidelines. These should occur before the approver signs off on the code. Testing of any software development or modifications should take place in a test environment using test data. Determine whether this is the case.

Part 6 Backup and Recovery
20 Ensure that appropriate backup controls are in place.
Failure to back up critical application data may severely disrupt business operations in the event of a disaster.
Determine whether critical data and software are backed up periodically (generally weekly full backups with daily incremental backups for the data) and stored off-site in a secured location. If cost beneficial and appropriate, duplicate transaction records should be created and stored in order to allow recovery of data files to the point of the last processed transaction. Also ensure that the application code is backed up and stored offsite in a secured location, along with any tools necessary for compiling and using the code.
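
The weekly-full/daily-incremental rotation above implies a restore chain: the most recent full backup plus every incremental taken since. A sketch (Sunday as the full-backup day is an arbitrary assumption):

```python
# Sketch of the full/incremental rotation and the resulting restore chain.
import datetime

def backup_type(day: datetime.date) -> str:
    """Full backup on Sundays (weekday 6), incremental otherwise."""
    return "full" if day.weekday() == 6 else "incremental"

def restore_chain(backups, target):
    """backups: [(date, kind), ...] sorted by date. Return the backups
    needed to restore state as of `target`: last full + later incrementals."""
    chain = []
    for day, kind in backups:
        if day > target:
            break
        if kind == "full":
            chain = [(day, kind)]   # a full backup restarts the chain
        else:
            chain.append((day, kind))
    return chain
```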

21 Ensure that appropriate recovery controls are in place.
Recovery procedures and testing are necessary to ensure that the recovery process is understood and functions operationally as intended.

Discuss with the application administrator and appropriate personnel to ensure that detailed recovery procedures are documented that define what tasks are to be performed, who is to perform those tasks, and the sequence in which they are to be performed. Testing of the recovery from backup tapes using the documented recovery procedures should be performed periodically.

Part 7 Data Retention and Classification
22 Evaluate controls around the application's data retention.
Data should be archived and retained in accordance with tax and legal requirements.
Evaluate the appropriateness of the controls with the developers and application administrator. These requirements will vary based on the type of data and should be acquired from the appropriate departments within your company.

23 Evaluate the controls around data classification within the application.
All application data should be assigned a business owner, and this owner should classify the data (e.g., public, internal use only, or confidential).

This classification should appear on any reports or transactions that display system data. Determine whether this has been done.

Part 8 Operating System, Database, and Other Infrastructure Controls
Detailed guidelines for controlling the operating system, database, and other related infrastructure components are beyond the scope of this chapter. However, security of the infrastructure on which the application resides is a critical part of application security. The applicable audit programs from this book's other chapters should be executed in addition to the application-specific steps provided earlier in this chapter.

Master Checklists
Application Best Practices
Checklist for Best Practices

    • Apply defense-in-depth.
    • Use a positive security model.
    • Fail safely.
    • Run with least privilege.
    • Avoid security by obscurity.
    • Keep security simple.
    • Detect intrusions and keep logs.
    • Never trust infrastructure and services.
    • Establish secure defaults.
    • Use open standards.

Auditing Applications
Checklist for Auditing Applications

    1. Review and evaluate data input controls.
    2. Determine the need for error/exception reports related to data integrity, and evaluate whether this need has been fulfilled.
    3. Review and evaluate the controls in place over data feeds to and from interfacing systems.
    4. In cases where the same data are kept in multiple databases and/or systems, periodic ‘sync’ processes should be executed to detect any inconsistencies in the data.
    5. Review and evaluate the audit trails present in the system and the controls over those audit trails.
    6. The system should provide a means to trace a transaction or piece of data from the beginning to the end of the process enabled by the system.
    7. The application should provide a mechanism that authenticates users based, at a minimum, on a unique identifier for each user and a confidential password.
    8. Review and evaluate the application's authorization mechanism to ensure that users are not allowed to access any sensitive transactions or data without first being authorized by the system's security mechanism.
    9. Ensure that the system's security/authorization mechanism has an administrator function with appropriate controls and functionality.
    10. Determine whether the security mechanism enables any applicable approval processes.
    11. Ensure that a mechanism or process has been put in place that suspends user access on termination from the company or on a change of jobs within the company.
    12. Verify that the application has appropriate password controls.
    13. Review and evaluate processes for granting access to users. Ensure that access is granted only when there is a legitimate business need.
    14. Ensure that users are automatically logged off from the application after a certain period of inactivity.
    15. Evaluate the use of encryption techniques to protect application data.
    16. Evaluate application developer access to alter production data.
    17. Ensure that the application software cannot be changed without going through a standard checkout/staging/testing/approval process after it is placed into production.
    18. Evaluate controls around code checkout, modification, and versioning.
    19. Evaluate controls around the testing of application code before it is placed into a production environment.
    20. Ensure that appropriate backup controls are in place.
    21. Ensure that appropriate recovery controls are in place.
    22. Evaluate controls around the application's data retention.
    23. Evaluate controls around data classification within the application.

NSA INFOSEC Assessment Methodology
The National Security Agency INFOSEC Assessment Methodology (NSA IAM) was developed by the U.S. National Security Agency and incorporated into its INFOSEC Training and Rating Program (IATRP) in early 2002.

NSA INFOSEC Assessment Methodology Concepts
The NSA IAM is an information security assessment methodology that baselines assessment activities. It breaks information security assessments into three phases: pre-assessment, on-site activities, and post-assessment. Each of these phases contains mandatory activities to ensure information security assessment consistency. It is important to note, however, that NSA IAM assessments consist of only documentation review, interviews, and observation. There is no testing done during an NSA IAM assessment. The NSA released the INFOSEC Evaluation Methodology to baseline testing activities.

Pre-assessment Phase
The purpose of the pre-assessment phase is to define customer requirements, set the assessment scope and determine assessment boundaries, gain an understanding of the criticality of the customer's information, and create the assessment plan. The NSA IAM measures both organizational information criticality and system information criticality. Organizational information consists of the information required to perform major business functions. System information then is identified by analyzing the information that is processed by the systems that support the major business functions.

The NSA IAM provides matrices that are used to analyze information criticality. A matrix is created for each organization/business function and each system that supports the organization. The vertical axis consists of the information types, whereas the horizontal axis includes columns for confidentiality, integrity, and availability. Information criticality impact values are assigned for each cell. Table 13-1 is an example of a human resources organization information criticality matrix.

Table 13-1: Organizational Information Criticality Matrix

Information Type                    Confidentiality  Integrity  Availability
Payroll                             H                H          M
Benefits                            L                M          L
Employee performance appraisals     H                H          L
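
A criticality matrix like Table 13-1 is also easy to hold as data. This hypothetical sketch shows how the high-impact information types could be pulled out for each attribute:

```python
# Table 13-1 expressed as a data structure (H/M/L impact values for
# confidentiality, integrity, and availability).
CRITICALITY = {
    "Payroll":                         {"C": "H", "I": "H", "A": "M"},
    "Benefits":                        {"C": "L", "I": "M", "A": "L"},
    "Employee performance appraisals": {"C": "H", "I": "H", "A": "L"},
}

def high_impact(attribute: str):
    """Information types rated High for the given attribute ('C', 'I', 'A')."""
    return [name for name, row in CRITICALITY.items() if row[attribute] == "H"]
```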
On-Site Activities Phase

The on-site activities phase consists of validating pre-assessment-phase conclusions, gathering assessment data, and providing initial feedback to customer stakeholders. There are 18 baseline areas that are evaluated during an IAM assessment:

    • Information security documentation such as policies, procedures, and baselines
    • Roles and responsibilities
    • Contingency planning
    • Configuration management
    • Identification and authentication
    • Account management
    • Session controls
    • Auditing
    • Malicious code protection
    • System maintenance
    • System assurance
    • Networking/connectivity
    • Communications security
    • Media controls
    • Information classification and labeling
    • Physical environment
    • Personnel security
    • Education, training, and awareness

Post-assessment Phase
Once the assessment information is gathered, it is analyzed and consolidated into a report in the final post-assessment phase. The final report includes an executive summary, recognition of good security practices, and a statement regarding the overall information security posture of the organization. Additional information regarding the NSA INFOSEC Assessment and Evaluation Methodologies is available from the NSA.

Information Risk Management
Identify assets; quantify and qualify threats; assess vulnerabilities; remediate control gaps; manage ongoing risk.
Phase 1 Identifying Information Assets
The first phase in the risk-management life cycle is to identify the organization's information assets. There are several tasks that must be completed in order to be successful. These steps include the following:

    • Define information criticality values
    • Identify business functions
    • Map information processes
    • Identify information assets
    • Assign criticality values to information assets

The goal of this phase is to identify all information assets and assign each information asset a criticality value of high, medium, or low for its confidentiality, integrity, and availability requirements. For example, we may identify credit card information as an information asset that is processed by our retail system. This information asset is governed by the Payment Card Industry (PCI) data security standard and is valuable to thieves if disclosed in an unauthorized manner. We also know that if altered, this information is useless to us but that in most cases a temporary loss of access to this information is tolerable. As a result, we would assign credit card information values of high for both confidentiality and integrity and medium for availability.
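
The credit card example above can be recorded the same way. The consistency rule shown here (a PCI-governed asset should carry High confidentiality) is an illustrative sanity check of the reasoning in the text, not a quotation from the PCI standard:

```python
# Sketch of an information-asset record with C/I/A criticality values.
from dataclasses import dataclass

@dataclass
class InformationAsset:
    name: str
    confidentiality: str   # "H", "M", or "L"
    integrity: str
    availability: str
    pci_governed: bool = False

def ratings_plausible(asset: InformationAsset) -> bool:
    """Flag obviously inconsistent ratings for regulated payment data."""
    if asset.pci_governed and asset.confidentiality != "H":
        return False
    return True
```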

The best way to identify information assets is to take a top-down approach beginning with organization functions, identifying the processes that support those business functions, and drilling down to the information assets that are processed. Figure 15-2 represents this approach to information asset identification using business function decomposition.

Mapping Information Processes
Since the nature of IT is to process information, IT risk (as opposed to other types of risk) has the added complexity of touching several points in a process. Identifying these process flows is absolutely critical for a few reasons:

    • It helps us to identify which information assets are used by each process.
    • It helps us to identify process points (steps) that require manual input (which tend to be more vulnerable than fully automated processes).
    • It helps us to understand which information systems need protection.

Once we have identified our organization's critical business functions, we can begin to identify the processes that support those business functions and the information assets that flow through the processes. It is important to note that we are not concerned with the technology used to process the information at this point but rather with the process flow itself.

We had identified the retail operations business function earlier. We know that the retail operations business function is responsible for processing credit card transactions that feed all the company's cash flow and are regulated by PCI. Thus, we can identify credit card processing as a critical process. From here we will need to determine the steps or systems (process points) that are included in the process. For our example, we may determine that credit cards are processed in the following manner (Figure 15-3):

    1. Associate swipes a credit card during a retail sale.
    2. Transactions are aggregated to a system within each retail store.
    3. Aggregated transactions are transmitted to the main office during the night over the Internet via site-to-site virtual private networks (VPNs).
    4. Store transactions are aggregated with transactions from all the other stores.
    5. All transactions are sent to the credit-card processing house over a dedicated telecom data link in a batch file the following day.
    6. The credit card processing house deposits funds into a corporate bank account 2 days later.
IT Governance Maturity Model
8/23/2007 4:35 PM

The IT Governance Institute (ITGI) was founded in 1998 as a think tank. It is a nonprofit, vendor-neutral organization based in the US. Below is an extract from a book I was reading recently:

ITGI developed a maturity model for the internal control of IT that provides to organizations a pragmatic and structured approach to measuring how well developed their processes are against a consistent and easy-to-understand scale. The maturity model was fashioned after the one originated by the Software Engineering Institute (SEI) for software development. SEI is a federally funded research and development center sponsored by the U.S. Department of Defense and operated by Carnegie Mellon University.

ITGI expanded the basic concept of the maturity model by applying it to the management of IT processes and controls. The principles were used to define a set of levels that allow an organization to assess where it is relative to the control and governance over IT. […T]hese levels are presented on a scale that moves from nonexistent on the left to optimized on the right. By using such a scale, an organization can determine where it is and define where it wants to go, and if it identifies a gap, it can do an analysis to translate the findings into projects. Reference points can be added to the scale. Comparisons can be performed with what others are doing if those data are available, and the organization can determine where emerging international standards and industry best practices are pointing for the effective management of security and control.


Maturity Level: Nonexistent
Status of the internal control environment: There is no recognition of the need for internal control. Control is not part of the organisation's culture or mission. There is a high risk of control deficiencies and incidents.
Establishment of internal controls: There is no intent to assess the need for internal control. Incidents are dealt with as they arise.

Maturity Level: Initial/Ad Hoc
Status of the internal control environment: There is some recognition of the need for internal control. The approach to risk and control requirements is ad hoc and disorganised, without communication or monitoring. Deficiencies are not identified. Employees are not aware of their responsibilities.
Establishment of internal controls: There is no awareness of the need for assessment of what is needed in terms of IT controls. When performed, it is only on an ad hoc basis, at a high level and in reaction to significant incidents. Assessment addresses only the actual incident.

Maturity Level: Repeatable but Intuitive
Status of the internal control environment: Controls are in place but are not documented. Their operation is dependent on the knowledge and motivation of individuals. Effectiveness is not adequately evaluated. Many control weaknesses exist and are not adequately addressed; the impact can be severe. Management actions to resolve control issues are not prioritised or consistent. Employees may not be aware of their responsibilities.
Establishment of internal controls: Assessment of control needs occurs only when needed for selected IT processes to determine the current level of control maturity, the target level that should be reached and the gaps that exist. An informal workshop approach, involving IT managers and the team involved in the process, is used to define an adequate approach to controls for the process and to motivate an agreed action plan.

Maturity Level: Defined Process
Status of the internal control environment: Controls are in place and are adequately documented. Operating effectiveness is evaluated on a periodic basis and there is an average number of issues. However, the evaluation process is not documented. While management is able to deal predictably with most control issues, some control weaknesses persist and impacts could still be severe. Employees are aware of their responsibilities for control.
Establishment of internal controls: Critical IT processes are identified based on value and risk drivers. A detailed analysis is performed to identify control requirements and the root cause of gaps and to develop improvement opportunities. In addition to facilitated workshops, tools are used and interviews are performed to support the analysis and to ensure that an IT process owner owns and drives the assessment and improvement process.

Maturity Level: Managed and Measurable
Status of the internal control environment: There is an effective internal control and risk management environment. A formal, documented evaluation of controls occurs frequently. Many controls are automated and regularly reviewed. Management is likely to detect most control issues, but not all issues are routinely identified. There is consistent follow-up to address identified control weaknesses. A limited, tactical use of technology is applied to automate controls.
Establishment of internal controls: IT process criticality is regularly defined with full support and agreement from the relevant business process owners. Assessment of control requirements is based on policy and the actual maturity of these processes, following a thorough and measured analysis involving key stakeholders. Accountability for these assessments is clear and enforced. Improvement strategies are supported by business cases. Performance in achieving the desired outcomes is consistently monitored. External control reviews are organised occasionally.

Maturity Level: Optimised
Status of the internal control environment: An enterprise-wide risk and control programme provides continuous and effective control and risk issues resolution. Internal control and risk management are integrated with enterprise practices, supported with automated real-time monitoring with full accountability for control monitoring, risk management and compliance enforcement. Control evaluation is continuous, based on self-assessments and gap and root cause analyses. Employees are proactively involved in control improvements.
Establishment of internal controls: Business changes consider the criticality of IT processes and cover any need to reassess process control capability. IT process owners regularly perform self-assessments to confirm that controls are at the right level of maturity to meet business needs, and they consider maturity attributes to find ways to make controls more efficient and effective. The organisation benchmarks to external best practices and seeks external advice on internal control effectiveness. For critical processes, independent reviews take place to provide assurance that the controls are at the desired level of maturity and working as planned.

Handling PMs better
8/23/2007 4:34 PM

We don't treat PMs the way we treat stakeholders: and that is a mistake.

1) Prepare for the meeting: what do you want to take them through? Would an agenda help? A printout?

2) Speak less - NEVER read text out from the screen or from the printout: let them read it, and then discuss.  While they are supposed to be reading (or - gasp - thinking) just be quiet. The silence does not need to be filled.  BTW: what you say is neither compelling nor convincing: what they say is FACT: try to get them to say things.

3) Let them digress - improving their understanding of issues all around the stuff on the screen is a good use of time - but let them know it's a digression if doing otherwise might raise expectations

4) You are controlling the agenda of the meeting, and that will make them profoundly uncomfortable. Partly to compensate for that, make them control the pace of the meeting.  With some people, you wait until you see them tick the item off, and then you proceed. With others, they clearly say something like "what's next". But some people need a bit of coaching to get them to communicate this clearly to you. Once they are ready to go on, do so. If it's going too slowly, put them in charge: "At this pace, we won't finish today - what would you like to do?"

5) Besides forming a common understanding of requirements, good meetings end up with each participant having a todo list. Don't be worried about having them look further into things (particularly with their project plan, or the dev team).  Try to assign responsibilities etc when the item is discussed: you don't want assumptions here. Confirm via email after the meeting, including schedule. WHAT-WHO-WHEN

6) If you find yourself talking over them - not letting them finish a sentence or a thought - that is a very bad thing. (And raising your voice to speak over them is even worse.) Try teaching yourself to talk only when you are NOT holding a pen in your hand - and hold it until they stop talking. Why a pen? Because you can take notes on what they are saying.  Try to have a slight pause before you start speaking: subconsciously, it makes the other people believe that you thought for a second about what to say.

Please review the article at this link:

Links in Use - SOA
8/23/2007 4:33 PM

SOA Runtime Governance
= AmberPoint

AgilePoint - - process design in Visio; VS integr for devs; own exec runtime
X - optimization. Builder is weak. Integration not as visible.  Very nice Case Manager piece.

Application generator solutions
= K2.Net from [connectors to MS bits; solid SDK story; nice auto-form and switch capability]
XX MetaStorm

Decision Management
X Blaze Adviser [web - normal or tech - version and control; optional diy forms]
= InRule author/repository/runtime/client SDK - sensible discussion of perf
XX RuleBurst [capture in word - this is the Centrelink thing]

Scope of Projects - and Overspending
8/23/2007 4:31 PM

Everyone wants to achieve good value for money, but overspending is very tricky to prevent.  Standard project management tries to track benefits against features - but quantifying benefits is very difficult. The best practice advice on project governance says that overinvestment is very difficult to manage from the estimating process - it is better to approach it from the reporting and management direction. I don't really understand that comment: I have always seen overspending occurring deep in the details…  Here are the main areas in my experience - what do you think?

1. Prevent "request stuffing"
Request stuffing is the common (and normal) tendency for stakeholders to "require" features 'just in case they might need it one day'. Sometimes a prioritization process will drive this out - that’s certainly one of the intentions of our prioritization work in the Scope Baseline.  If that doesn't work, the recommended technique is to squeeze the project, putting the stakeholders in charge of making the tradeoffs. Amazingly, if you empower the stakeholders to make the hard choices, in my experience they ALWAYS can - regardless of the stories they were telling when you were in charge and trying to do prioritization.

  • HOWEVER: even if the project could afford to do all the silly things the stakeholders ask for, you still should not unless you can be convinced it is value-for-money for the Commonwealth.  Discuss with your manager or the Project Office.
  • BAs want to feel that "the customer is always right" - and need to watch that tendency…
  • Developers want to do the best work possible, and hence sometimes stuff in requests that aren't needed for the project.

2. Prevent "gold plating"
Gold plating is the technical team doing stuff they think is cool but which has low business value. Typically it is "within scope", but it is the expensive version of the functionality.  This is not always a bad thing: developers can get a lot of job satisfaction out of it.  The recommended approach is to recognize that there is a 2-hour, a 2-day, and a 2-week version of any function - and to make the decision about which one is right for this project more explicitly.

  • Developers want to do the best work possible - except in areas they find boring, which they want to do as quickly as possible.  You need to monitor them.
  • Pressuring developers to deliver faster does NOT get them to remove gold plating: they usually think it is necessary work until someone takes them through the decision making process.  A good question to use: "Is there an easier way to do this that would be good enough for this project?"
  • Design for an appropriate level of robustness and scalability, not always the best we can do. Note that organisational standards might NOT be appropriate for your project: make a cost-risk assessment, not just an assumption.
  • Don't hand craft if you can avoid it: while in some cases developers will try to convince you it is faster than relying on tools, it has significant additional maintenance costs.

3. Prevent "refocusing"
Typically projects are authorised by senior managers who have particular strategic objectives, but requirements are largely specified by middle managers who have different - usually more operational - priorities.  The result can be a system that makes operations easier, but which fails to support the strategy: and that is both a failure and a large overspend. Note that requirements that do not lead to the goals of the project are called "incidental". Having incidental goals isn't a bad thing - when they can be accomplished very easily along with the primary goals - just don't let the incidental goals drown out the actual purpose of the project. Keep your focus (and the project's) on the approved purpose and benefits, and treat incidentals as "nice to have".  Typically, a project should try to work through any conflict by explaining the project's purpose and benefits, but if the conflict continues, get the senior manager to confirm the purpose and benefits directly with their people [or change the project's scope to encompass the other goals]. Also watch out for technical team members trying to refocus the project based on their prejudices...

4. Don't over document

  • Some documentation is temporary - for the work in progress - and should be abandoned after it has served its purpose. Other documentation has ongoing value, and needs to be maintained. Make the distinction explicit, and ensure your team doesn't maintain documents past their usefulness, and doesn't allow useful documents to get out of date.
  • Don't type out material pointlessly:
    • modern tools can generate some information on demand. 
    • if the material is hand-drawn (on a whiteboard or paper) consider scanning rather than redrawing or typing it out: or - better - record it electronically the first time.
  • Try to avoid having information in multiple places, since it makes so much work keeping it all up to date and synchronised.  Cross reference, don't duplicate.

5. Done is a Binary state.
All work has to go to a completed state - not a 90% state, a completed state.  Experience shows that work that is "almost done" has a 95% chance of being less than half done. If you focus on actually completing individual tasks, you get a feeling of progress, not 'SSDD' [same shit, different day]. Further, both studies and experience show there is less rework when things are progressively completed, rather than 'almost done' and then reworked forever.

Types of Tasks and Chief Programmer Teams
8/23/2007 4:29 PM

Three types of tasks:

completely dividable
Laying a brick wall after the bottom row has been laid: you can throw as many bricklayers as you want onto the job - each works on an independent part.

(interface is implicitly but completely defined by the shape of the bricks and the bottom row, although there may be a laying pattern...)

[not really completely, since there is a diminishing return - every meter or two?]
[what about adding a helper to pass the bricks?]

not dividable
- pregnancy ("making a baby" ain't right)
- baking a cake
- growing a tree
- learning something (but could predigest?)

partially dividable
: may require extensive communication between the people among whom the tasks are distributed : more people == more lines of communication => inefficiencies of scale

[optimal team size peaks at 5: above that each person costs more than they add to the work time!]

Note that "Chief Programmer Teams" are an attempt to limit the inefficiency of scale (and to recognize the overwhelming efficiency of a few peak performers). In a CPT, the head knows everything, and has team members doing most of the work. There isn't much communication between team members: just between them and the head.
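The "more lines of communication" point can be made concrete with the standard pairwise-channels formula - n(n-1)/2 for n people who all talk to each other. A quick sketch (the functions are my illustration, not from the original note):

```python
def channels(n):
    """Pairwise lines of communication among n people: n*(n-1)/2."""
    return n * (n - 1) // 2

def cpt_channels(n):
    """A Chief Programmer Team cuts this to n-1: everyone talks only to the head."""
    return n - 1

for n in (2, 5, 10, 20):
    print(f"{n} people: {channels(n)} full-mesh lines vs {cpt_channels(n)} via a chief programmer")
```

At 5 people the full mesh is already 10 lines; at 20 it is 190 - which is why the head-and-spokes structure of a CPT scales so much better than a fully connected team.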

Agile Modelling and Big Requirement Up Front
8/23/2007 4:28 PM

I have just finished a little review of "Agile Modelling" whitepapers. Most of the content is the same old agile story about doing unrealistic things (multi-skilling developers, training up business users to do techie work, having so much business user time on the project that they can do some of the technical work, etc etc).  There are a few important gleanings, though, in the form of their statement of the business case for Agile Modelling - an attack on BMUF (big modelling up-front).

  • A 2001 study by M Thomas in the UK of 1027 projects found that scope management issues were cited as the primary cause of failure by 82% of failed projects.
  • Jim Johnson of the Standish Group: when requirements are specified early in the lifecycle 80% of that functionality is relatively unwanted by users - 45% of the features are never used, 19% are rarely used, and 16% are sometimes used.

[I don't credit the individual papers since I have boiled their ideas together]


  • when project stakeholders are told they need to get all their requirements down on paper early in the project, they desperately try to define as many things they might possibly want as they can. They know if they don't do it now, then the "change prevention process" will make it very hard to get them added later.
  • things change between the time the requirements are defined and when the software is actually in use.

[Note: several pieces of the criticism don't apply to us, since we don't have years between spec and live date - only months - and we already (in the new process) do detailed requirements by increment. But the issues intuitively seem as if they might impact us, so let's learn what we can from them.]

  • Make the change process more cooperative.  For example, put the stakeholders in charge of prioritizing things, and hence deciding what is in and what is out, in a more timebox-related approach (with IT just being their adviser). Recommendation: change its name to "change facilitation process"
  • Make sure the stakeholders know the change process will be under their control before you gather requirements - otherwise they will load up the wish list unreasonably.
  • Permit some uncertainty - allow the stakeholders to say: we either want X or Y, and we will decide by <date>.
  • Reduce the time between the baseline and the live date - finalise the baseline later, and release sooner (which probably means in smaller increments). Stay in touch with the users in the meantime, and watch for needed change. Nothing is more expensive than building the wrong system.
Change management
8/23/2007 4:24 PM
Let's call the old process what it was: "change prevention". And create a new cooperative activity for "change facilitation".
I would rather support refinements during the process than have stakeholders overspecify ("don't forget a kitchen sink - it might come in handy one day") - note the Standish Group finding about features that never got used…

Some degree of requirements change is normal and desirable: lack of change requests suggests either a massively over-specified scope or stakeholder disengagement.

Who decides, and how, needs to be identified up front, and then actually implemented by the requirements engineers and PMs as they interact with stakeholders (i.e. key stakeholder wins, PM chooses, squeaky wheel, etc etc). If we require the stakeholders to reach consensus, then we don't really have any control over the project schedule!

I like to give ownership of change to stakeholders, so it changes from being a "change prevention process" to a "change facilitation process".  Cf. "Let the customer own both the scope and the schedule, and just act as an expert adviser for them to make the decisions with an understanding of the risks." - IT only needs to own the estimate of effort, and the process.

(Said with funny accent) 'Requirements ain't requirements' - requirements get MoSCoW prioritization. Changes are the same. Just because it comes in on a change request doesn't mean it is a MUST.

Once a change is agreed to, how/when it goes into the timebox/increment cycle is a different decision.

What do I do if there are just too many requirements that are "should" on the MoSCoW scale? How can I defend some choices?  Make up a "detailed prioritisation decision matrix": rate benefit and penalty on a relative scale of 1-9, add them together to get Relative Value. Rate Relative Cost and Relative Risk. Convert all three to percentages.  Priority = value / (cost + risk)
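A minimal sketch of that decision matrix in Python (the feature names and scores are invented for illustration; the formula is the one above - relative value percentage divided by the sum of the cost and risk percentages):

```python
# Hypothetical features, each rated on the relative 1-9 scales described above.
features = {
    "export to PDF": {"benefit": 8, "penalty": 6, "cost": 4, "risk": 2},
    "audit trail":   {"benefit": 5, "penalty": 9, "cost": 6, "risk": 4},
    "themed skins":  {"benefit": 3, "penalty": 1, "cost": 5, "risk": 3},
}

# Totals used to convert the raw ratings into percentages.
total_value = sum(f["benefit"] + f["penalty"] for f in features.values())
total_cost = sum(f["cost"] for f in features.values())
total_risk = sum(f["risk"] for f in features.values())

for name, f in features.items():
    value_pct = 100 * (f["benefit"] + f["penalty"]) / total_value
    cost_pct = 100 * f["cost"] / total_cost
    risk_pct = 100 * f["risk"] / total_risk
    f["priority"] = value_pct / (cost_pct + risk_pct)  # Priority = value / (cost + risk)

# Highest priority first - a defensible ordering of the "shoulds".
for name, f in sorted(features.items(), key=lambda kv: -kv[1]["priority"]):
    print(f"{name}: priority {f['priority']:.2f}")
```

The output gives a relative ranking, not an absolute score - it is only meaningful for comparing the features against each other.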

A good project will always have some unsatisfied "requirements", where there just wasn't enough comparative value to do them - if you did everything, you probably wasted a lot of money.

I am not very interested in getting stakeholders to sign-off on requirements. I am far more interested in getting them to sign-on to a particular baseline.
Notes from Software Requirements by Karl Wiegers, 2ed
8/23/2007 4:22 PM

Cosmic Truths About Software Requirements

#1: If you don't get the requirements right, it doesn't matter how well you execute the rest of the project.
[requirement validation might be a good time to grab some user acceptance criteria: the testers will love you!]

#2: Requirements development is a discovery and invention process, not just a collection process

#3: Change happens
(and our objective is to manage that change in the stakeholders best interests, not inhibit change)
(Capers Jones says 3% per month during design and coding - for our projects, maybe 5%?)
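If that monthly rate compounds, the cumulative churn over a build is easy to estimate. A back-of-envelope sketch (the 3% and 5% figures are from the note above; the compounding assumption is mine):

```python
def cumulative_change(monthly_rate, months):
    """Compounded requirements churn after the given number of months."""
    return (1 + monthly_rate) ** months - 1

for rate in (0.03, 0.05):
    print(f"{rate:.0%}/month over 12 months -> {cumulative_change(rate, 12):.0%} cumulative")
```

Roughly 43% at 3%/month and 80% at 5%/month over a year - a strong argument for the shorter spec-to-live gaps discussed in these notes.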

#4 The interests of all the project stakeholders intersect in the requirements process.
(they will conflict: figure out how to resolve those conflicts up front - options in ITAB:
a) key stakeholder wins
b) ITAB PM decides
c) majority vote
d) squeaky wheel
e) lowest cost/risk option)

#5 Customer involvement is the most critical contributor to software quality.
[If we screw it up, you will spend quite a lot of time explaining to us what we did wrong. How about investing a fraction of that time upfront so we can get it right?]
[product champions for each user class; use prototypes]

#10 You're never going to have perfect requirements.
[aim for good enough, and then emplace a baseline and change control]

Build & Burn
8/23/2007 4:21 PM

Microsoft calls it a "Smoke Test", not burn.

A great primer is McConnell's Rapid Development (Chapter 18 - Daily Build and Smoke Test) - it's in SkillPort.

From my experience:

A) Create an automated build process. No matter how much extra work it is, this has to be EXACTLY repeatable or strange things happen. Automate it, and maintain it.

B) Create an agreed set of regression tests that are automated  - again, it has to be EXACTLY repeatable. 

    B1) Have a harness that can call unit tests, and log their success/failure.
    B2) I strongly encourage getting the testing guys to automate some tests for you too - their Mercury tools can do cool things in this space.
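To illustrate what the B1 harness means, here is a rough sketch using Python's unittest in place of NUnit/Mercury (the test class and the summary line are invented for illustration, not from any real project):

```python
import io
import unittest

class SmokeTests(unittest.TestCase):
    """Stand-in for the project's real regression suite."""
    def test_totals_add_up(self):
        # Quantitative assertion: check actual data, not just "did it run".
        self.assertEqual(sum([1, 2, 3]), 6)

def run_harness():
    # Load the suite, run it against a quiet stream, and log one summary line -
    # the same one-liner could be emailed to the team, as point D suggests.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTests)
    result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
    status = "no smoke seen" if result.wasSuccessful() else "the build is broken"
    print(f"ran {result.testsRun} tests: {status}")
    return result

result = run_harness()
```

The returned TestResult carries the failure/error lists, so logging success/failure per test (rather than just the summary) is a small extension.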

C) As the code extends, the regression suite has to as well. 

    C1) I usually have the Dev Lead responsible for ensuring this happens
    C2) I usually have an automated measure of how much of the code was executed by the smoke test, and watch to make sure it doesn't drop unexpectedly. [Apparently a code coverage capability ships in VS 2005, but I haven't used that one.]  I program this into the test harness.

D) The Smoke test result should be visible to everyone with an interest in the project - make it an agenda item in every meeting - usually a one liner (either no smoke seen, or "so&so broke the build"). I usually have the test harness automatically email the results to everyone too.

E) People keep arguing with the "Daily" nature.  If it is all automated, you press a couple of buttons, and it runs while you make a coffee. The reason it is daily is to find exactly what introduced the problem: the changes to the build in a single day are small enough to find the problem very quickly.  If you leave a week or a month between smoke tests, figuring out what went wrong is a much more major (and contentious) issue. Trust the best practice advice, and do it daily until you have experience with the technique (by which time people usually don't argue any more).

F) The smoke test runs off the automated build, which is made from checked-in code. You need to keep developers checking stuff in regularly. This needs to be managed and monitored by the dev lead and the PM. Developers will tend to keep their stuff out of the build 'just a bit longer'. My standard is 2 days: every task is small enough to be completed and checked back in within 2 days. Then, in exceptional circumstances that I am managing, a week might not be too much risk. But if your standard is weekly, how long do the exceptions go for? There is a lot of risk if the time between check-ins is too long.

G) The test harness is a reusable resource - an investment.  Don't skimp on it.  Make building and maintaining the harness an explicit part of the project - just like unit testing.  There are different views on whether it should be the same harness the developers use for running unit tests - and good arguments on both sides.

H) "Pragmatic Unit Testing in C# with NUnit" (The Pragmatic Starter Kit, Volume II - in SkillPort) makes the point that "smoke tests" usually only check that the code made it that far, without the application exploding. It is much better to check that the data is actually right.

    H1) This requires you to reset the test data at the start of the smoke test run.  This is straightforward to build into the harness: you just gotta remember to do it.

    H2) Code walkthroughs should review unit tests too, and ensure they are quantitative, not just Assert.IsTrue(true).

Testing that the code executes is not - IMHO - an adequate test.
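To put the H2 point in Python terms (the transfer function is a made-up example): an Assert.IsTrue(true)-style test passes no matter what the code did, while a quantitative test checks the resulting data:

```python
def transfer(accounts, src, dst, amount):
    """Toy operation: move an amount between two account balances."""
    accounts[src] -= amount
    accounts[dst] += amount
    return accounts

accounts = transfer({"a": 100, "b": 50}, "a", "b", 30)

assert True                 # the Assert.IsTrue(true) anti-pattern: always passes
assert accounts["a"] == 70  # quantitative: the data is actually right
assert accounts["b"] == 80
print("quantitative checks passed")
```

The first assertion would still pass if transfer silently did nothing; only the balance checks catch that.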

Application Development Guidelines
8/23/2007 4:21 PM

Next level down:



1. Every piece of code shall be:

a) Related back to the requirement(s) it satisfies, via the design
b) Designed before it is written
c) Subject to peer review
d) Exercised by code-based unit tests
e) Exercised by requirements-based testing
f) Exercised by sociability/scalability testing

2. Purchased applications shall pass through design to plan how they can be used to fulfil the requirements, and which features of the application will be used, and testing to confirm that they do in fact do so – they may not require any development work, and that is the only stage they may skip.


Design: (Designers take the business requirements as an input, and create a technical plan for the realisation of those requirements.)

3. A project must pass a quality gate before passing elements into design. That gate involves adequate requirements.

4. Standard outputs from the design process shall be:

  • RBB SWOT Analysis: [CSA’s preference is REUSE first, BUY second, and BUILD third. The Designer must conduct a SWOT analysis on existing investments that could be reused, complete and partial options for purchase, and building, and provide that advice to the Application Architect.]
  • Application Diagram [partition the system; choose patterns]
  • System Diagram [interfaces - web services, classes]
  • Deployment and/or Logical Datacenter Diagram [what goes where. May have different ones for Dev and Prod]
  • Performance Model [quality of service, workload, performance objectives - risk assessment]
  • Threat and Vulnerability Model [to the product and to the project]
    • Security risk assessment: (cross ref to security guidelines)
    • Requirements risk assessment: The time between the stakeholders signing off on the requirements and the project delivering code for them to accept (UAT) must be as short as possible to prevent the requirements from changing significantly. Any time this gap is more than 6 weeks it shall be handled as a requirements risk. Any project with a high degree of stakeholder conflict about the requirements shall be handled as a requirements risk. If the project has a high degree of requirements risk, then it shall be so identified to the BAT, and they must be consulted on the plan to handle that risk. Such plans might involve, for example, mechanisms like mock-ups and iterative development.
    • Technology risk assessment: Any technical project feature that CSA has not developed before shall be handled as a technology risk. This is particularly true for any project that will use non-standard security approaches. If the project has a high degree of technology risk, then it shall be dealt with using mechanisms like prototypes and proofs of concept. It shall be identified to the Architecture Board as a project with technology risk.
  • Plan for Mockups / Risk-Reduction Prototypes [mitigation steps for risks, as necessary]
  • Phasing plan for technical work: dependencies
  • List of development tasks [with (a) Priorities (b) Risks (c) Integration Sequence] - may be done by the dev lead in the dev stage.

5. Applications shall use group-based security. Mapping users to groups happens in the central security store, and should not be reproduced in applications.

6. Reporting is an important part of every application: applications shall provide their own reporting, leveraging the common reporting infrastructure. Business will specify the data that must be available for the reports. Our preference for the technology used to build reports is (a) suitable for self-service creation/amendment (b) assembled (c) hand coded.

7. All design work shall be peer reviewed, and be subject to approval by the Applications Architect.



8. A design must pass a quality gate before passing into dev. That gate involves the design being approved by the Application Architect.

9. Developers shall implement the approved design, and escalate any proposed changes back to the designer.

10. Developers shall work according to the coding guidelines appropriate for that development environment.

11. Code should emphasise the following values: SIMPLICITY, READABILITY, REUSE.

12. Developers shall honour the UI freeze, to allow help and doco to be created, and follow the approved process when the freeze has to be violated.

13. Developers are responsible for testing that their code does what the designers intended: function, scale, sociability.

14. Before being checked into the source code control repository, code shall be assessed for complexity and other metrics, and those metrics stored for reporting.



15. A project must pass a quality gate before passing elements into test. That gate involves adequate requirements, an approved design, and code that meets set quality metrics and has been signed off as meeting the requirements.

16. Testers shall check that the code does what the requirements ask for. Testers may raise defects against the requirements, particularly where the requirements are not internally consistent, but also to verify anything that seems ‘strange’.

17. Testers shall confirm that the product will handle expected loads.

18. Testers shall confirm that the product conforms to CSA standards (e.g. usability, accessibility).

19. Testers shall confirm that the help and doco is accurate.

20. Testers shall assist the stakeholders to confirm that the product does what they wanted (UAT).

21. Testers shall produce guidance for ITM on what testing is necessary to confirm a successful installation into Prod.




Ideas for products that need to be created by Applications Architect to support these standards:

  • Samples/guidelines for the design deliverables 
  • Coding guidelines per development environment
  • Usability standards 
  • Guidelines on using a common reporting infrastructure
  • Load testing guidelines
  • UI freeze process and violation process
  • Standards for online help and doco (including splitting responsibility between business and IT)
PMBOK
8/22/2007 5:35 PM

There is a limited download available of PMBOK, for personal use only.

The full thing has to be purchased.

The CD version is available from - it is USD$50

(paperback is the same price - but 31.50 from Amazon)

Business Cases - Compliance with Policy
8/22/2007 5:27 PM

AGIMO continue to make further advances with the ICT Investment Framework and its underpinning resources. While <we> only need to use these tools in submissions to Cabinet, it would be wise to take them into account in the continuous improvement of our processes. In my opinion, Architecture should be asking for variances from AGIMO policy to be explicitly identified and justified, so we can consider whether they should operate under a dispensation.

Recently finalised and released are the Business Case Tools, which are supported by AGIMO-funded training.

(There are appropriate options inside the materials for projects as small as $2million.)

Addons we should consider for Sharepoint 2007 (circa Aug 2007)
8/22/2007 4:50 PM

User experience

SharePoint allows users to send an email with a link to a single item in a document library. There is a free add-in which allows for multiple items to be linked.


Outlook functions available in Outlook 2007 but not Outlook 2003 when used with SharePoint: When using Outlook 2003 the built-in Calendar/Contacts integration is one way – SharePoint to Outlook. You cannot make changes in Outlook and send them back to SharePoint. Solution: train the users. This will also be resolved once CSA installs Outlook 2007


SharePoint Tasks do not integrate with Outlook Tasks. Tasks allocated to a user should sync to Outlook, and any updates made in Outlook should sync back to SharePoint. Solution: a tool is available to sync between the two applications


As part of the document management services, the saving of emails from inbox or sent items into SharePoint is a multi-click process that requires a high amount of overhead when a user has to do it a lot. There is a vendor utility to make it easier, or we could write a custom button for Outlook.


Default SharePoint search is across every site, file and data store mapped to SharePoint. It is suggested that a refined tool framework would simplify the user experience and make it more likely that users would find the information quickly.


SharePoint’s authoring paradigm assumes serial single-document creation and meta-tagging. As such there is no out-of-the-box capability to author, meta-tag and categorize large numbers of documents into SharePoint. Solutions are available at:


Administration capabilities

SharePoint lacks the ability to review multiple site configuration which would help with debugging of errors. Proposed solution:


SharePoint has a cleanup utility to remove dead hyperlinks that it has generated. However, when users have manually created links into SharePoint content, these links are not cleaned up. There are a number of software solutions, ranging from "tagging" dead links, to removing them. Some mechanism to find such links is highly recommended.


Although it is built on Microsoft's SQL technology that has good support for replication, SharePoint is based around having everything in one central store which may create issues with remote sites with poor bandwidth. Solution: a replication service is available.


Migration of development and test areas to production. SharePoint provides no capability to automatically and easily replicate site and area changes. These are relevant in a couple of areas – migrating "SharePoint-based-applications" between environments, and provisioning something to every existing branch site, for example.


echo for SharePoint provides rich features that allow you to easily replicate and deploy changes to web parts, SharePoint sites, settings, areas, lists, libraries, permissions, views, fields, alerts, metadata and more. echo handles content and configuration. For moving just content from dev-test-prod, a content manager is available


Install NetMeeting in a locked-down environment
8/17/2007 3:01 PM
First, we install NetMeeting
1. right click on your desktop, go to New, and choose shortcut
2. paste the following into the target box
3. Press OK to close the shortcut box.
4. You will now see a new shortcut on your desktop - double click on it.  This will run the installer for NetMeeting.
Say No when it asks for a Directory.
The install will put an icon for Netmeeting on your desktop.
Using Netmeeting
See its help.  The main features are sharing an application or the whole desktop, chat, whiteboard, and sending files. Before you can do anything else, you need to establish a connection - like getting someone on the phone before you can talk to them.
Because <your org> doesn't use this service, there isn't a "NetMeeting directory" for us to use. So we will need the person at one end to let the person at the other end know their current number.
To start, the person on one end needs to go to Help/About, and note down the IP Address at the bottom. It may be different from day to day (it changes randomly).
They send that IP Address to the person at the other end - it's like letting them know the phone extension of a conference room you are in.
The other person then starts a call, and puts the IP Address in the "To" field.
From there, follow your nose.
Setup for being mobile
8/17/2007 2:30 PM
What you need is to convince Virtual PC to put the VSV file (the saved state) onto the portable hard drive. The way to do this is to create an environment variable for Virtual PC on the host computers which points to a new location for the My Virtual Machines folder (by default, it goes to My Documents). For example, on my machine, I set the variable to G:\. To do this, follow these steps:
  1. On the host computer, right-click My Computer, and then click Manage.
  2. Right-click Computer Management (local), and then click Properties.
  3. Click the Advanced tab, and then click Environment Variables.
  4. Under System variables, click New.
  5. In the Variable Name box, type myvirtualmachines.
  6. In the Variable Value box, type the path of the folder (etc) that you want to use.
  7. Click OK two times, and then close the Computer Management window.

If you already have files in the My Virtual Machines folder, move them to this location.

Now it won't matter if the portable hard drive has different drive letter assignments on different computers: as long as the environment variable is right for that machine, it will all work.

(Note: this use of environment variables replaces the Connectix approach of using registry values.)

TIP: Super charge your search of MSDN, using Google
8/17/2007 2:28 PM
Annoyed that searching MSDN takes several clicks? Here is an alternative: you can search Google, specifying the scope as the MSDN site! In a Google search box, you would type: <search terms>
Now, let's make that easier - I have made the following registry hack:
        [HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\SearchUrl\MSDN]
Now all I have to do is type in the IE address bar
        MSDN <search terms>
and I get a Google search for those terms with a scope of MSDN.
(If you would like that too, save the attachment to your desktop, rename it to end with .reg, and merge with your registry.)
Make VS.Net IDE open faster
8/17/2007 2:24 PM
To hasten the startup of the Visual Studio .NET IDE, get rid of the Start Page. Because the Start Page requires all Web-browsing components to be loaded, you can chop off a considerable amount of startup time by skipping it. To turn off the Start Page, select Options from the Tools menu to bring up the Options dialog. In the Environment/General property page, select Show empty environment in the At Startup combo box.
Speed up VS.Net
8/17/2007 2:24 PM
Consider turning off tracking the active item in the Solution Explorer window. This will keep the Solution Explorer selection from bouncing all over the place when working on different files in a project. From the Tools menu, select Options. In the Options dialog, select the Environment/Projects and Solutions property page and uncheck Track Active Item in Solution Explorer.
Adapt UI to the WXP theme
8/17/2007 2:23 PM
The look of any Win32 application can be adapted to the current theme without writing any code. All that's needed is an XML manifest file with the same name as the executable, with a .manifest extension. Amazingly, the same XML manifest file can work unchanged with any executable on that machine, but you might want to customize the product and copyright information.
For themes to work, Win32-based applications should make a call to InitCommonControls before using common controls. While this is normal for C++ applications, Visual Basic programmers will have to adjust their thinking. So if you are writing applications with Visual Basic for Windows XP, bear in mind that if you don't call InitCommonControls at the very beginning of your app, it won't exploit Windows XP visual styles. NOTE: This means it might 'just work' with an existing application (if it already calls InitCommonControls, all you do is supply a manifest file…) Instant UI upgrade, without even recompiling.  I tried this with one of my Win2K projects, and it worked!
In addition to standalone files, manifests can be embedded in the application as a resource of type RT_MANIFEST. The DLL loader module knows how to look for and deal with manifests.
Finally, you can get the theme to apply to WSH scripts - just copy the XML manifest file into the System32 folder on Windows XP, then rename it to wscript.exe.manifest.
Example Manifest File
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<assemblyIdentity
        version="1.0.0.0"
        name="YourApplicationName"
        type="win32"
        processorArchitecture="*"   />
<description>Your application description here.</description>
<dependency>
        <dependentAssembly>
                <assemblyIdentity
                        type="win32"
                        name="Microsoft.Windows.Common-Controls"
                        version="6.0.0.0"
                        processorArchitecture="*"
                        publicKeyToken="6595b64144ccf1df"
                        language="*"   />
        </dependentAssembly>
</dependency>
</assembly>
Saving PDA Battery Power
8/17/2007 2:22 PM
While written for Smartphone 2004, not PocketPC, this app is interesting: it turns off bluetooth until an incoming phone call arrives, and turns it off again after the phone call - presumably we could get something like that for our bluetooth stuff...
Wireless Networker - this utility allows the PPC to reduce wireless power consumption when it's not being used - results are remarkable.
Extend battery life by choosing mild, non-contrast colors to lower screen power consumption - provides a neat utility to change the themes…
PocketPC Screen Saver saves significant battery power by shutting down the power to the screen and touch-sensor a few minutes after the screen was last touched with the stylus. The device itself is not powered off, so your applications are still up and running.
Pocket Battery Analyser helps to identify the power drain impact of using different features/etc (but doesn't DO anything) -
Creating eBooks for Microsoft Reader
8/17/2007 2:22 PM
Microsoft DO provide an end user tool for creating eBooks: the free Read in Microsoft Reader (RMR) add-in for Microsoft Word 2002. See <>
They also provide a free Content Software Development Kit.  You have to manually convert documents (HTML; Word processing formats; Desktop publishing formats) into Open eBook (OEB) compliant tagged text (the kit provides a Markup Guide that explains the OEB compliance requirements and includes samples to work from), and then you pass the OEB-compliant content files through Litgen.dll via your conversion tool.
This tool is really intended to allow third parties to create conversion tools...
The leading commercial product is the ReaderWorks range: ReaderWorks 2.0 accepts Word documents, HTML, OEB package files, text (ASCII) and images (JPG, GIF, PNG).
ReaderWorks Standard is a free application for building Microsoft Reader eBook titles for personal or non-commercial use. ReaderWorks standard provides all the tools to build a Microsoft Reader eBook, but does not contain the tools for customization or commercial distribution of a title. ReaderWorks Standard is the perfect tool for a teacher, business user, or author who wants to develop and use their document, report or book in Microsoft Reader format.
ReaderWorks Publisher is a desktop software application designed for most users seeking to customize their Microsoft Reader eBook title for commercial purposes. ReaderWorks Publisher contains all the tools to develop the eBook title, as well as features to add cover art, cover page information and prepare the title for commercial sale. (RRP USD$119)
See <>
Other interesting Notes:
Free online service:
Ebook Express: upload your files; they make them into an eBook and you download it. Free. Powered by ReaderWorks technology...
Reader Ebook Wizard allows you to publish text and HTML documents as an MS Reader e-book. It provides a simple step-by-step wizard that lets you specify the details of the publication, select the files to be included, and publishes/compiles everything as an MS Reader e-book. You can create e-books from scratch or from an existing .opf package file. It supports css, htm, html, txt, xml as well as bmp, jpg, jpeg, gif and png. <>
OverDrive Connect allows users browsing a web page to make eBooks from it (if you pay to connect your site, all users can get this feature)
TextCafe from Texterity can take input formats and turn them into OEB -
nice overview
Speed up the Start Menu
8/17/2007 2:21 PM
This is an old tweak - it has been around since Windows 95, and is still alive today! If you think your Start menu could respond a little quicker, try this:
Start the Registry Editor
Go to HKEY_CURRENT_USER \ Control Panel \ Desktop \
Right-click the String Value MenuShowDelay, and select Modify
Change the Value data (0 is fast, 400 is the default; values are in milliseconds)
When ready, press OK and close the registry editor
Log off, or restart Windows for the changes to take effect
Config My Computer to show username on machine name
8/17/2007 2:21 PM
Start the Registry Editor
Go to HKEY_CLASSES_ROOT \ CLSID \ {20D04FE0-3AEA-1069-A2D8-08002B30309D}
Right-click the value LocalizedString, and choose Rename, rename it to LocalizedString.old
Select New > Expandable String Value from the Edit menu
Name the new REG_EXPAND_SZ value LocalizedString
Right-click the LocalizedString value and choose Modify
In the Value Data: box enter %USERNAME% on %COMPUTERNAME%
Exit the registry editor
Click the desktop and press the F5 key. This will refresh your desktop, and rename the "My Computer" icon
To reverse, just delete the new LocalizedString value, and rename LocalizedString.old to LocalizedString.