  • Data Packs Version 2

    You will find attached to this article the PowerPoint slides used in the Data Pack Version 2 workshop.

    If you have any questions, add a comment to this article and I will do my best to answer them.

    Thanks.

  • Introduction to MVC

    This workshop was an introduction to MVC and how we have used it in the recently developed Registration project, giving the App Support team in particular a look at how it is structured, with a view to understanding and supporting it.

    The PowerPoint file used is attached; if you have any questions, please let me know.

  • Introduction to SignalR & WebAPI

    These are two technologies that are now considered part of ASP.NET, but they are not otherwise related. I made the point at the start of the workshop that I have grouped them together here simply because they are two cool technologies which I recently used for the first time on the CEO Selfie website, and I wanted to share my limited knowledge so people can consider using them in future.

    The PowerPoint file used in the workshop is included below. Please let me know if you have any questions, or if you would like to be pointed to where we used them in the CEO Selfie solution.

  • WIT Workshop - The basics

    I have attached a presentation to this article containing a few slides that show some of what we covered in the session, along with a few of the different types of charts in WIT and what they mean.

    There will also be a video of the session available soon; I will post the link here so that you can refer back to it at a later date if needed.

    If you have any questions, if there is anything you were unsure of, or if there was something I didn't cover which you would like to know more about, please add a comment to this article and I will answer it for you.

  • Architecture Overview

    The architecture overview was very much freestyled; however, attached is the architecture documentation I put together, along with the diagrams I came up with during the session. I have also included the notes I made on the differences, advantages and drawbacks.

    If you have any questions about any of this, give me a shout and I will go through it with you.

  • Registration in 2015

    I have attached the presentation I used during the workshop so you can refer back to it if needed.

    If you have any questions, if there is anything you were unsure of, or if there was something I didn't cover which you would like to know more about, please add a comment to this article and I will answer it for you.

  • Employee Surveys from website to WIT

    In the latest version of WIT we have switched from using a traditional relational database to using SQL Server Analysis Services to calculate the scores displayed on the charts.

    When employee surveys are completed by clients, the answers are saved in the traditional relational database. From here they go on a journey until they finally fulfil their potential as a chart in WIT.

    That journey goes something like this:

    1. They start out in our relational database as a set of rows in a series of tables.

    Our primary database structure has several legacy complexities which are not really relevant to WIT, such as employee surveys contributing to List and Accreditation entries.  Only one of these entries will be available in WIT.

     

    2. Next stop on the journey is the Data Warehouse. The Data Warehouse is a halfway house between the strict relational model used by the primary database and the OLAP structure needed by Analysis Services. When moving data from the primary database to the data warehouse we remove duplication (only the primary entry is taken over), remove irrelevant information (rows and columns) and assign surrogate keys to identify the records inside Analysis Services.

    There is a script in the Data Warehouse project which contains the logic for taking data from the primary database and moving it into the Data Warehouse.
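
    To make the shape of that script concrete, below is a minimal sketch of the kind of statement it runs. All table and column names here are hypothetical and do not match the real schema.

    -- Hypothetical sketch only: table and column names are illustrative.
    -- The warehouse table's IDENTITY column provides the surrogate key automatically.
    INSERT INTO Warehouse.Fact_SurveyAnswer (SurveyID, QuestionID, AnswerValue)
    SELECT s.SurveyID, a.QuestionID, a.AnswerValue
    FROM dbo.Dat_Survey_Answers AS a
    INNER JOIN dbo.Dat_Surveys AS s ON s.SurveyID = a.SurveyID
    WHERE s.IsPrimaryEntry = 1;  -- only the primary entry is taken over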

    3. The cube is defined and populated based on the tables and views in the data warehouse. Tables in the data warehouse relate to either Fact or Dimension tables in the cube. A Fact table holds data we will want to aggregate, while a Dimension holds data we will want to filter or display by. When the cube is processed it pulls the required information from the data warehouse and performs any necessary pre-calculations.

     

    Each night an Integration Services job runs which checks whether a cube rebuild is required and, if it is, automates the steps described above.

     

  • WIT Features Workshop - WIT Presentation Builder

    Although the workshop I conducted was primarily about the new features which had been recently released, part of it featured an explanation of the WIT Presentations process.

    I have added the presentation slides which go some of the way to explaining how the presentations are generated in WIT.

  • Knowledge Centered Support Workshop

    The presentation is available here: Knowledge Centered Support

    I've also attached:

    • the Article Template (a .PNG file)
    • the KCS Practices Guide

    Why KCS?

    A brief reminder of the potentially amazing benefits of this methodology:

    Solving tickets should be faster:

    • 50-60% improvement to resolve time
    • 30-50% increase in first contact resolution (this would be cases where we are able to refer ticket requesters to an article)

    Optimal use of resources:

    • 70% improved time to proficiency (getting up to speed: as a new starter or when working on a new or old project, technology or process – for our users and for our team members)
    • 20-30% improved employee retention (yippee!)
    • 20-40% improvement in employee satisfaction (double yippee!)

    Enables Self-Service:

    • This improves our users’ experience
    • Up to 50% ticket deflection (being able to point tickets to articles)

    Builds Organisational Learning:

    • Provides actionable information which can be used to highlight improvements and recurring issues
    • 10% issue reduction due to root cause removal (as above – we will be able to really see where we can potentially reduce tickets by improving a process)

    Please feel free to vote or comment against this, or let me know if you have any questions or thoughts.

  • Providing progress updates

    Deciding how to handle partially completed tasks when you are going to be out of the office is something which crops up fairly regularly. Providing an update to your manager or other interested parties before leaving is always a good idea.

    This might seem fairly obvious if you are going on annual leave for a couple of weeks, but what has become apparent recently is that making sure work could be continued in your absence is something we should be doing on a daily basis rather than just before annual leave. This may seem excessive but consider a scenario where illness or technical issues (PC failure) prevent you from continuing your work the next morning. Maybe you work on a laptop and it is lost or stolen.

    To mitigate these risks there are a couple of simple, pain free things we can do:

    1. At the end of each day, take a couple of minutes to update the ticket or work item (user story or task) with your progress: what has been completed and what is left to do. This should help you when you return to the task the next day as much as it would help someone else looking to pick the work up.
    2. If working on a task in TFS, update the time remaining field. You should have a good idea of how much work is left (in hours) so update this field accordingly.
    3. Create a shelveset for your current work in progress. Even if you are not ready to check it in or submit it for review, you can create a shelveset backup of your work. Shelvesets are stored on the server, protecting you from local hardware issues or loss of laptops. It also means that if you have taken your laptop home with you, we can still access the latest development version of the code.
    4. If you know you are going to be out of the office (annual leave etc) then it is always worth dropping your manager an email with a brief summary of what you're working on and where you are up to. If your support tickets are unsolved, it’s also good practice to communicate the summary to Samantha to ensure your tickets can be actioned or picked up in your absence.

  • No Dashboard Steps Showing on the My Account Page for 2016 Registrations

    Some companies that have already registered for the 2016 process (before 10th April 2015) may not be showing any dashboard steps on their My Account page, just like in the screenshot below:



    This will most likely be caused by the company going back to the first page of the new registration process (the Edit Company Details page) and changing their legal status.

    Whilst the company survey record is updated to use the newly selected survey, any existing company survey dashboard steps assigned to that company survey will still be using the dashboard steps from the old survey.

    Although this issue has been fixed for all future registrations, any previous registrations displaying it can easily be fixed by executing the Dashboard.prc_ChangeCompanySurveyDashboardToUseCorrectSurveySteps_update stored procedure, passing in the company survey ID of the affected company survey.
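
    For example (the parameter name here is an assumption, so check the procedure definition for the real signature):

    -- @CompanySurveyID is an assumed parameter name; verify against the procedure definition.
    EXEC Dashboard.prc_ChangeCompanySurveyDashboardToUseCorrectSurveySteps_update
        @CompanySurveyID = 12345;  -- ID of the affected company survey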

  • Workspaces and Branches on Projects

    Branching is something that is necessary when working on Projects of any real size. We take a copy of the source code from the trunk, creating a branch to work on. The trunk can still be updated and released where necessary and the branch source code is developed in isolation for a period of time, depending on the length of the project.

    We wanted to reduce the amount of time it takes to branch projects and also merge the work between the trunk and branches, so to do this we have branched at a higher level. For the Survey Setup project we now have 3 branches, and only 2 workspace mappings required.

    The image below shows the 3 branched folders and also a main folder under the Team Project where our solution file is located.

    Using multiple Workspaces is something new to us with the Survey Setup project in 2016. Prior to this we had one workspace with all our projects mapped into a single Depot folder on our local hard drive. Below is the new folder structure for the depot.

    The Main folder is my old depot; this is one workspace which has everything mapped into it, for example the BC_Website folder and all the other projects. The SurveySetup2016 folder is for a second workspace; this has the solution file for the project and branched copies of all the code we need.

    Why have we used a second workspace?

    • It is easier to manage and maintain. On projects I have worked on in the past, the branched code has remained in my main depot for years afterwards; this approach makes it much clearer which workspace mappings you can delete at the end of the project
    • Workspaces isolate code changes from each other, so I can do some work in the Main workspace, then switch to working on the Survey Setup project, and the Pending Changes are kept separately. You also do a Get Latest per workspace, so you can keep your main workspace up to date but leave your Survey Setup workspace alone; maybe I’m a day into a big change and don’t want to merge changes in yet
    • Note: Get Latest is for one workspace. If you are publishing a site to staging, for example, make sure you get latest on the correct workspace or you'll not be publishing the latest version as you intended.
    • There are limits to how many files can be handled in a workspace. Some of our users started getting errors from Visual Studio when doing a Get Latest. We tracked this down to our Main depot workspace getting too large, so it is good to create new workspaces going forward to isolate the work.

    Below are two pictures which show you where to switch between workspaces when working in these windows.

    An example of the workspace mappings for Survey Setup is below. Where it says Active to the left of a mapping, the code from that folder on the server will be fetched into the local folder to the right.

    Cloaking

    As we are branching at a higher level now, it is possible we are bringing down additional source code and projects that we don’t actually need for Survey Setup. To stop this happening we can tell the server not to download the code from those paths to our local disk by adding a cloak in the workspace mappings.

    The below workspace mappings for example will exclude the Public Website from our workspace, which saves us disk space if we don’t need it for this project.

  • What are the workspace mappings for the Survey Setup 2016 Project?

    Below are the workspace mappings for the project; I have these under a new workspace called SurveySetup2016. It may look like an unformatted, messy list, but if you copy and paste it into your workspace mapping window it will populate the fields correctly for you.

    $/Survey Set-up 2016/Branches: E:\Depot\SurveySetup2016\Branches
    (cloaked) $/Survey Set-up 2016/Branches/BC Website/Source/BC_Website/Source/Application.PublicWebsite:
    (cloaked) $/Survey Set-up 2016/Branches/BC Website/Source/BC_Website/Source/Application.PublicWebsite.Contracts:
    (cloaked) $/Survey Set-up 2016/Branches/BC Website/Source/BC_Website/Source/Application.PublicWebsite.Tests:
    (cloaked) $/Survey Set-up 2016/Branches/BC Website/Source/BC_Website/Source/Presentation.Public.Website:
    (cloaked) $/Survey Set-up 2016/Branches/General/Source/DownloadReleaseApps:
    (cloaked) $/Survey Set-up 2016/Branches/General/Source/EventLogSourceWriter:
    (cloaked) $/Survey Set-up 2016/Branches/General/Source/ExchangeWebServices:
    (cloaked) $/Survey Set-up 2016/Branches/General/Source/GeneralTFS:
    (cloaked) $/Survey Set-up 2016/Branches/General/Source/GeneralWinFormControls:
    (cloaked) $/Survey Set-up 2016/Branches/General/Source/MapPoint:
    $/Survey Set-up 2016/Main/Source/Survey Set-up: E:\Depot\SurveySetup2016\Survey Set-up

  • Updating a Company's Survey between Private and NFP

    When changing a company's survey, e.g. from NFP BCI 2016 to BCI 2016, you should check that the dashboard steps stay consistent.

    For example, if the company completed payment for the initial survey before moving across, the 'Checkout' and 'Complete entry payment' steps will be filled in against the first survey and may not be copied across to the new one.

    This can be solved by setting a 'Date Completed' in the database against the dashboard steps. Region 67906 in the 2015 Schema completes these steps for rocketfuel.
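
    As an illustration only (the real statement lives in region 67906; the table and column names below are assumptions):

    -- Illustrative sketch: see region 67906 in the 2015 schema for the real statement.
    UPDATE ds
    SET ds.Date_Completed = GETDATE()
    FROM Dashboard.Dat_Company_Survey_Dashboard_Steps AS ds
    WHERE ds.CompanySurveyID = 12345  -- the new company survey
      AND ds.Step_Name IN ('Checkout', 'Complete entry payment');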

    Pages such as 'Edit Registration Options' in Back Office > Survey Processes rely on the completion of these dashboard steps to function.

  • List of Copyright Dates Requiring an Annual Update

    These are the locations containing the copyright dates which require updating every year. Please add to the list below if a copyright location has not already been included:

    MC3 - MainPage.xaml

  • Creating a Pro-Ex account

    To add a new employee account for Pro-Ex staff, create a normal account for the member of staff and then update the account type to 'Supported by Innovation'.

    To do this:

    1. Create account in back office with an account type of Company Employee
    2. Run the attached SQL script
    3. Ask the user to log into b.co.uk. They should then be able to change their password using the 'Account Information' page.
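
    The attached script is the authoritative version; purely as a hypothetical sketch of the account-type change it performs (table and column names are assumptions):

    -- Hypothetical sketch only: run the attached script rather than this.
    DECLARE @AccountID INT = 12345;  -- the account created in step 1
    UPDATE dbo.Dat_Accounts
    SET Account_Type = 'Supported by Innovation'  -- assumed column name
    WHERE AccountID = @AccountID;
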
  • My Code Review Doesn't Have a Shelveset - What Should I Do?

    There may come a time when you come to request a review for some work you have done in Visual Studio, only for Studio to sit there spinning away when trying to shelve your pending changes, seemingly stuck.

    When you go to "My Work" in Team Explorer and view you review, your review looks like the one below with no shelveset attached to it:

     

    You could just abandon the review and create a new one, but this will just clog everyone's inbox up with more "review request" and "abandoned review" emails.

    A better way would be to create a shelveset that you can attach to your original review.  Here's how:

    1. It is probably best to restart Studio before continuing; otherwise it might fail again when you try to create another shelveset.
    2. Go into your review and copy the name of the missing shelveset (highlighted in yellow in the screen grab above).
    3. Go to Pending Changes and click on Shelve.
    4. In the box that appears, paste the name of the shelveset you copied in step 2 into the yellow box that says "Enter a shelveset name".
    5. Press Shelve.

    If you go back to your review, you will see that it now has a shelveset associated with it and can be reviewed.

  • Data Pack v3 Job Grade Breakdown Chart Data

    When making any modifications to the Data Pack v3 Job Grade Breakdown slide, it is not entirely obvious how the charts work and how the data is populated. The aim of this article is to clear up any confusion and help you find out how to go about amending them.

    The job grade breakdown slide in the v3 data pack is (at the time of writing) slide #17 in the template file and looks like this.

    The slide is in two sections: the job grade response rate triangle on the left and the job grade factor charts on the right. This article will concentrate on the factor charts on the right.

    Each chart is populated using data which is held in an attached worksheet. You can access this worksheet by right-clicking on the chart and selecting 'Edit Data' in the context menu.

    The worksheet will then appear and look something like this.

    Each of the factors has its own row. The columns represent a range for the percentage difference score for each of the factors, i.e. if the Leadership factor had a percentage difference of -4, the value would be populated in column F on row 2.

    Each column has a series which appears when a value is present in the cell. Each series has a different colour for each of the difference ranges, i.e. a difference of anything below -14 has a dark red colour, and as you get towards 0 the red becomes lighter. As you get into positive scores the colour becomes green, getting darker as you move away from 0.

    Code

    When a request is generated by the user, the WIT service picks it up, recognises what type of data pack is requested, then creates a generator to produce the pack.

    In the v3 data pack generator, there is a method to populate this slide. In this method, the code uses OpenXML to populate the worksheet data for each of the charts. When you view a presentation, what is actually shown in the chart is cached data generated from the worksheet data; for this reason, the code also populates the cached data so the chart appears correctly when you first open it.

    Example

    Below is an example of a generated data pack showing real data.

    And this is the populated chart data for the top chart.

    Making Changes to the Series Colours

    If you need to make changes to the colours of the ranges, you must follow these instructions for each of the three charts:

    1. Make sure that you check out the template file from source control by finding it in Visual Studio's Solution Explorer and right-clicking on it.
    2. In the menu select 'Check Out for Edit...'.
    3. Open the template file and navigate to the slide.
    4. Right-click on the chart and select 'Edit Data'.
    5. In the worksheet, add a value (e.g. '1') in every cell for the range whose colour you wish to change. For example, if I want to change the colour for any value below or equal to -15, I would add a 1 in cells B2..B9.
    6. Close the worksheet and you should now see the series for that range.
    7. Click on a bar in the series making sure that they are all selected (or select them all with a shift-click on each of them).
    8. Right-click on one of the bars and select 'Format Data Series...' from the context menu.
    9. In the panel that appears in the right of the screen, select the 'Fill and Line' option (the tin of pouring paint icon).
    10. Select 'Fill'; next to the Color option there is a menu button to select the colour.
    11. Change the colour to your desired colour.
    12. Right-click on the chart and select 'Edit Data'.
    13. Remove all the values in the cells that you added in step 5 and close the worksheet.
    14. If you need to do the same for another series, repeat these steps for the different range.
    15. Save and close the file.
  • Task and check-in sizes

    Task escalation and large shelvesets are something we've all encountered at one point or another. As developers we have a tendency to get into a task and just keep writing code until it is finished, without ever taking a moment to step back and see if there is a more modular approach we can adopt.

    There are several reasons why we should be aware of this situation and take steps to mitigate it.

    1. Think of the reviewer! I was asked to review a new chart for WIT. The development had started a couple of months earlier and by the time it got to review there were 51 files in the changeset.

    Attempting to review a changeset of this size and complexity is very difficult and very time consuming. The biggest problem is the complexity: the reviewer is expected to keep track of all the changes you've made, their implications, and anything you might have missed. It is simply too big to be reviewed with any confidence.

    2. Merge hell. The longer you go without checking your work into the depot, the more likely you are to encounter merge issues when you finally come to check it in. When you get merge issues, in the best case you need to work out what additional testing you need to perform. In the worst case you have to rewrite some of your change to accommodate changes made by someone else.

    So, what can we do to mitigate these problems? Essentially, things we should be doing anyway: plan and design your change before you start writing any code, so that you have a clear idea of all of the changes required. To mitigate the risk of producing giant changesets, simply look at the list of changes required and break it down into manageable chunks.

    For example, consider the changeset mentioned above, which added a new chart to the executive summary in WIT. Rather than create a single task titled "Create Chart", as happened in this case, think about the steps involved in generating a chart:

    1. Get the chart data out of the cube
    2. Update the domain logic to determine if we need to get the chart data for this chart
    3. Update the application logic to handle a request for this chart
    4. Update the UI to request and display this chart

    Each of these tasks could be tackled in isolation. Some of them can even be done independently, meaning you would not end up waiting for a previous changeset to be reviewed.

    Probably the easiest way for you to monitor this for yourselves is to impose a maximum time between check-ins. I would say that you should be aiming to check something in twice a day and should definitely be committing something once a day.

    With splitting tasks down there is of course a danger that you will end up waiting for something to be reviewed before you can carry on with a particular task. If everyone is following this practice, and you get into the habit of actioning all review requests when you submit a piece of work, then we should have plenty of throughput on the reviews, minimising any hold-ups.

  • How do I create an Account?

    To create a new account, you should execute the following procedure: 'prc_Account_insert'.

    Viewing this procedure's definition should show you all of the values you need to enter.

    Setting the password as 'Password_not_set' will require the password to be reset on first use. This can be done either by using the 'forgotten password' link or by following this guide.

    TFS Item: 72596 shows an example of creating a Best Companies account.
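
    A hedged example of the call is below; the parameter names are illustrative, so open the procedure to confirm the real signature:

    -- Parameter names are illustrative; check the procedure definition for the real ones.
    EXEC dbo.prc_Account_insert
        @Forename = 'Jane',
        @Surname  = 'Smith',
        @Email    = 'jane.smith@example.com',
        @Password = 'Password_not_set';  -- forces a password reset on first use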

  • Can I queue a gated check-in again without re-checking-in the work?

    So the scenario is: the build failed and you now want to submit the work to the build again, but the work itself does not need to change. It is fine to submit it again as-is; maybe the build failed due to a server error, for example, or a merge conflict in the schema or another file.

    Use Team Explorer to select the BC Website Team Project, then the Builds section. The gated build is BC_Website CI; right-click on this and click Queue New Build...

    The window below will then be displayed. Pick the shelveset that contains your work; its name will start with 'Gated', which is the system-generated name given when you check in. Then tick the box underneath to say you want the changes to be checked in if the build succeeds.

    Finally click Queue.

    Hopefully you'll find this a bit quicker than unshelving and re-submitting your work again.

  • Editing Best Practice In WIT

    The 'Best Practice' examples you see in WIT are stored against a companyID and image in the 'BestPractice.Dat_Examples' table.

    If you need to delete a Best Practice example, you may also need to delete from the following tables, which contain foreign key references, in this order:

    - BestPractice.Lnk_Example_Factor
    - WIT.Dat_Best_Practice_Flags
    - BestPractice.Lnk_Example_SubFactor
    - MC3.Lnk_Account_Favourite_Best_Practice_Example
    - BestPractice.Dat_Examples

    The example should then no longer appear.
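
    Assuming each of these tables links back to the example via an ExampleID column (an assumption worth verifying against the foreign keys), the deletes would look something like this:

    -- ExampleID is an assumed column name; verify the foreign keys before running.
    DECLARE @ExampleID INT = 123;  -- the Best Practice example to remove
    DELETE FROM BestPractice.Lnk_Example_Factor WHERE ExampleID = @ExampleID;
    DELETE FROM WIT.Dat_Best_Practice_Flags WHERE ExampleID = @ExampleID;
    DELETE FROM BestPractice.Lnk_Example_SubFactor WHERE ExampleID = @ExampleID;
    DELETE FROM MC3.Lnk_Account_Favourite_Best_Practice_Example WHERE ExampleID = @ExampleID;
    DELETE FROM BestPractice.Dat_Examples WHERE ExampleID = @ExampleID;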

  • What can I do if a client doesn't have access to the new WIT Features?

    If a client has purchased a Pro Pack, but can't access the new Features in WIT, the chances are that their WIT item has been processed as an additional item rather than as the catalogue item.

    You can check this by writing a script and looking in the Dat_Order_Lines_Catalogue_Items table for the company and survey in question. If there is no WIT Pack in this table, then you can change your query to look in the Dat_Additional_Items table.
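
    A starting point for that check might look like the following; the filter column is an assumption, so adjust it to the real schema:

    -- CompanySurveyID is an assumed filter column; adjust to the real schema.
    DECLARE @CompanySurveyID INT = 12345;
    SELECT * FROM dbo.Dat_Order_Lines_Catalogue_Items
    WHERE CompanySurveyID = @CompanySurveyID;   -- no WIT Pack here?
    SELECT * FROM dbo.Dat_Additional_Items
    WHERE CompanySurveyID = @CompanySurveyID;   -- then check the additional items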

    Where there is an additional item for a WIT Pack, you will need to change the order so that it is pointing at a catalogue item instead, and also add the WIT Features into Dat_Company_Survey_Wit_Features.

    To guide you in the right direction, you can search in the schema solution for a previous schema change to resolve this issue.

  • Creating a Duplicate WIT Account

    User Story 67870 contains an example of creating a duplicate bespoke survey for a company.

    If a request is put through to create a duplicate WIT account for a company, the first thing you need to do is check for any anonymity issues that could single out an individual person's response. Members of the research team can often help with this.

    Once you have confirmed that there are no issues preventing you from giving a company this data, you should follow the steps below:

    1. Execute the 'prc_CreateNewSurvey' procedure.

    2. Execute the 'prc_UpgradeCompany_insert' procedure. This is the procedure called in the 'Upsell Routine' in back office and will copy across data from one survey to another. Choose a similar survey to copy from e.g. 'BCI Survey 2015' to 'Mouchel 2015'.

    Note: Remember to populate the @CreateNewSurveyNumbersForDestCoSurvey parameter if you are changing anything to do with employee surveys, such as removing employment groups. This will allow you to edit the employees independently between surveys, i.e. you can deactivate an 'EmployeeSurvey' in the new survey without affecting the original 'EmployeeSurvey'.

    3. Insert into the Dat_Company_Survey_WIT_Access table to allow the company access to WIT.

    4. If you have created new survey numbers, create a mappings variable table: match your new EmploymentGroups to the EmploymentGroups in the survey you are copying from by employment group name, and store both sets of IDs. Then insert the mappings into this table, joining on the EmploymentGroupIDs from the original survey (see the sketch after these steps).

    5. Insert the new 'EmploymentGroupIDs' and the original linked 'EmploymentGroupIDs' from the variable table into the 'Dat_Employment_Group_Mapping' table.

    6. Look in the 'WIT.prc_ComparableSurveys_Bulk_insert' procedure. This calls a select procedure at the end; you will need to use the code from this select procedure to create a SingleIntegerTable of comparable surveys from the original survey. Use the 'Bulk insert' procedure to insert these survey IDs against your new 'SurveyID'.

    7. If you have made any changes to the employees in the new survey you will need to re-populate the 'BCI Scores'. Do this using the 'OverallSurveyResults_Insert' procedure.

    8. To copy over the benchmarks from the original account, you will need to select the ranges from the original survey and insert them into Ref_WIT_Company_Ranges against the new CompanySurveyID.

    9. The Client Liaison will not be set for the new company survey, so you should set this. Also, the company's WIT accounts will need to be assigned over to the new WIT.

    10. If the company has created any bespoke demographics in the survey you are copying from, you will need to create them for the new survey. Don't worry about the demographic answers, as these should have been copied across in the up-sell routine.

    11. As you have created a completely new survey you will need to populate the data warehouse and process the cube to be able to view the data. Remember when submitting your work to request a WIT refresh at the end.
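
    As referenced in step 4, below is a sketch of the mapping variable table approach. Apart from 'Dat_Employment_Group_Mapping', every table and column name here is illustrative.

    -- Sketch for steps 4 and 5; most names here are illustrative, not the real schema.
    DECLARE @OriginalSurveyID INT = 100, @NewSurveyID INT = 200;
    DECLARE @Mappings TABLE (NewEmploymentGroupID INT, OriginalEmploymentGroupID INT);

    -- Step 4: match the new employment groups to the originals by name.
    INSERT INTO @Mappings (NewEmploymentGroupID, OriginalEmploymentGroupID)
    SELECT newEG.EmploymentGroupID, oldEG.EmploymentGroupID
    FROM dbo.Dat_Employment_Groups AS newEG
    INNER JOIN dbo.Dat_Employment_Groups AS oldEG
        ON oldEG.Employment_Group_Name = newEG.Employment_Group_Name
    WHERE newEG.SurveyID = @NewSurveyID
      AND oldEG.SurveyID = @OriginalSurveyID;

    -- Step 5: insert the mapped pairs into the mapping table.
    INSERT INTO dbo.Dat_Employment_Group_Mapping (NewEmploymentGroupID, OriginalEmploymentGroupID)
    SELECT NewEmploymentGroupID, OriginalEmploymentGroupID
    FROM @Mappings;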