Mastering TypeScript Book : Coming April 2015

With the amazing growth of the TypeScript language and compiler over the past two years or so, it is surprising that there are relatively few books on the subject.  The books that have appeared to date have done a decent job of explaining the language features, but hardly any have taken that extra step into working closely with a particular JavaScript framework. 

Learning TypeScript is one thing, but learning to write Backbone.js applications with TypeScript is another, let alone Angular.js, ExtJs, Marionette.js, require.js, or node.js for that matter.

Each framework has its own peculiarities, syntax, object-creation lifetimes, and compatible libraries.  Some frameworks work naturally with TypeScript language features like interfaces and inheritance.  Some libraries do not. 

So we are left with a number of questions:

  • How do we choose between these frameworks?
  • What are the differences between these frameworks when writing with TypeScript?
  • How do we unit test our code within these frameworks?
  • How do we implement object-oriented design patterns within these frameworks in TypeScript?
  • How do we build Single Page Applications for the web using TypeScript?
  • How do we write declaration files?
  • How do we use generics?
  • How do we use modules in node.js and require.js?
  • Can we use dependency injection in TypeScript?

Announcing “Mastering TypeScript”

Over the past few months I have been working closely with the Packt Publishing team on their next book in the TypeScript range, named “Mastering TypeScript”. I am pleased to announce that it is scheduled for publication next month, April 2015.

You can read all about it here: https://www.packtpub.com/web-development/mastering-typescript

B03967_MockupCover_Normal

Have fun,

– blorkfish

SQL Server Database Source Code control with DBSourceTools 2

Scripting Existing databases

This is the second blog on using DBSourceTools to help source control SQL Server databases. You can find the first part here, where we discussed the benefits of source controlling databases, and went through a step-by-step process of starting up a new project, deploying a target database, and then including patch scripts as part of this deployment process.

This blog post is geared towards working with existing databases, and is a guide for projects where a database is already in place, or where you would like to use a TEST instance of your database as your source snapshot.

For this tutorial, we will use the AdventureWorks sample database from Microsoft. You can download a version of AdventureWorks for any flavour of SQL Server from their codeplex site:

http://msftdbprodsamples.codeplex.com/

New project

Fire up DBSourceTools, and select File | New | Project.

UsingDBSourceTools_screenshot_21

Give the project a name, and point it at a drive on disk. In the screenshot below, we have called our project local_AdventureWorks_0.0, and used d:\source\AdventureWorks as the base directory.

clip_image002

Click on the Database button, and fill in the details on the next form. In this example, we have called the database local_AdventureWorks_0.0, which is the same naming convention that we used in our previous tutorial. This naming convention is serverName_DatabaseName_Version. If you were pointing to a TEST instance of a database, then this servername should match the database server name where the database resides. Using a naming convention makes it clear where the source database originates from:

clip_image004

You can use the Databases button to bring up a list of current databases on the server, and simply pick the correct database. Remember to click on the Test Connection button before the OK button will be enabled. This Test Connection will warn you if you do not have the correct privileges on the server to perform the scripting step. Hit Ok, and then Yes to script the AdventureWorks database.

Once the process has finished, expand the local_AdventureWorks_0.0 icon in DBExplorer to see what database objects were scripted:

clip_image006

Now would be a good time to save the project: File | Save Project. Click on Yes to refresh the data from the database.

Adding a target database

The process of adding a target database is the same as we went through in the first article. Select Database | Add | New Deployment Target from the File Menu.

UsingDBSourceTools_screenshot_22

Fill in the required fields in the New Database Connection screen:

clip_image008

Again, note that the Nick Name can be anything, but database NickNames must be UNIQUE across an entire DBSourceTools project. I have stuck to the same naming scheme as we used before for Target Databases: deploy_machineName_DatabaseName_Version.

Again note that we can simply type the Database Name in the Database text box at the bottom of the screen – and it is not necessary to click the Databases button or the Test Connection button when creating a Target Database.

Click on OK, and then expand the nodes in DBExplorer to see that the Deployment Target has been created successfully:

clip_image010

Now would be a good time to Save the project. You can safely say No when asked to refresh the data from the database, as we have already done this step previously.

Setting scripting options.

DBSourceTools allows you to have fine-grained control over which database objects are scripted, and included in your deployment. You may just want to script a specific subset of tables, or you may want to script all database tables, but just include data for your “configuration” tables. In this tutorial, we will script all database objects, and all data.

To configure these options, right-click on the local_AdventureWorks_0.0 source database, and select Properties:

clip_image012

This will bring up the Source database properties screen.

clip_image014

This screen includes a set of checkboxes and buttons at the bottom of the page that control which database objects to include in the deployment process.

Including data.

Click on the Tables button to see which tables will be included in the deployment process:

clip_image016

This screen shows all tables within the source database, and has options to script the table, script the data, or both. Click on the Data dropdown on the menu-bar, and select Script all.

clip_image018

This will enable the Script Data checkboxes for all tables within the database. Hit the Save button to store these scripting options to disk.

Reload

This table options screen has a Reload button on the top left. If your source database has changed since the last time you set scripting options (in this case, if the list of tables has changed), hit the Reload button to reload this list from the source database. This ensures that the scripting options stored on disk, and used by the deployment step, are in sync with the source database.

This process is similar for Procs, Views and Users, and each screen will allow fine-grained control over which database objects should be used in the deployment step.

Main options

The Source Database properties screen also has checkboxes marked for each of the database object types. Un-checking this “main” checkbox will exclude all objects for that category. Generally, you would want each of these checkboxes to be “on”.

Make sure that the Data checkbox is also “on”, because without this, all of the per-table settings will be ignored.

clip_image020

Once again, after making changes, Save the Project.

Writing Targets

The final step in configuring a source database is to write out our deployment targets. Navigate on the left-hand side to the Deployment Targets node, right-click, and select Write Targets. Select No when asked to refresh data from the database.

UsingDBSourceTools_screenshot_23

Each time you write targets, you are given the option of refreshing data from the database. If you would like to refresh the data at any time, then simply Write Targets, and select Yes for DBSourceTools to refresh the data from the source database.

The Write Targets step simply updates the Run_Create_Scripts.bat file based on the database objects found within the source database, and combines this with your scripting options. If you double-click on the Run_Create_Scripts.bat file, you will find that DBSourceTools simply runs sqlcmd to create tables, and then DBSourceDataLoader.exe to load data.

DBSourceDataLoader.exe is optimized to use SQLBulkCopy routines, and can load million-record tables in a matter of seconds. The speed of the data loader is only constrained by the speed of your local development machine.

Deploy the target.

Once we have finished the Write Targets step, we can right-click on the deploy_local_AdventureWorks_0.1 Target database, and select Properties. This will bring up the Target Database properties screen:

clip_image022

Simply hit Deploy Target from the menu-bar of this screen.

clip_image024

The Deploy Target screen is a confirmation showing your target database, its server, and which batch file DBSourceTools will use. This is your final chance to cancel a deployment if you hit the button inadvertently. Note that DBSourceTools does a destructive deployment, so it will completely remove the target database, recreate it, and re-deploy the database schema and data.

Hitting OK here will start the Deployment process:

clip_image026

In the screenshot above, one of the stored procedures is generating an error, and so the output text is coloured in red.

Common deployment errors

One of the most common errors when scripting source databases is caused by having different directory names for the .mdf and .ldf files.  If your deployment results screen shows up a lot of red errors, then this is most probably the cause:

UsingDBSourceTools_screenshot_24

On most servers, database files and log files are written to a specific directory, and generally not on the C:\ drive.  DBSourceTools uses the SQL Server Management API to generate the CreateDB.sql script when a new source database is created – to ensure that all of the database settings are correct.

To fix this problem, simply navigate to the CreateDB.sql script within the DBExplorer, and double click on the icon.  This will open up a new text editor window showing the CreateDB.sql script. 

image

Notice on the second and fourth lines that the script uses a FILENAME parameter which has the physical path to both the .mdf and .ldf files.  This full path MUST exist on your local machine.  To fix these errors, either create this directory on your local machine, or modify it to the same directory as all of your other .mdf and .ldf files.
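As an illustration (the exact script DBSourceTools generates will differ, and the paths shown here are hypothetical), a CreateDB.sql script typically contains FILENAME clauses like the following, and these are the paths that must exist on your local machine:

```sql
-- Illustrative sketch only: the database name and FILENAME paths below are
-- hypothetical examples of what a generated CreateDB.sql may contain.
-- Edit the paths to a directory that actually exists on your local machine.
CREATE DATABASE [AdventureWorks]
ON PRIMARY (
    NAME = N'AdventureWorks_Data',
    FILENAME = N'D:\SQLData\AdventureWorks_Data.mdf'
)
LOG ON (
    NAME = N'AdventureWorks_Log',
    FILENAME = N'D:\SQLData\AdventureWorks_Log.ldf'
)
GO
```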

Viewing deployment results

Each deployment will create a Deploy_Results_<datetime>.txt file in the deployment target directory. You can always open this text file to view the deployment logs.

Adding the target as a source

As we saw in the first blog on DBSourceTools, you may want to include the newly deployed database as a source database in DBExplorer. This allows you to see both databases side by side, and allows for schema comparisons, data scripting options and much more.

To add the newly created database as a source database, simply click on Database | Add | New Source Database.

clip_image028

Give this new source database a NickName – which is generally machineName_DatabaseName_Version – so in this case local_AdventureWorks_0.1.

clip_image030

Remember that you can click on Databases to bring up a list of the available databases on the machine, and that you must click Test Connection before the Ok button will be enabled. Once you have hit Ok, select Yes to script the new database in DBSourceTools. You should now have both databases showing up in DBExplorer:

clip_image032

Checking the number of records

DBSourceTools has a number of built-in script utilities. To generate a SQL script that will count the number of records in your database, right-click on the database, select New Query and then the Count Rows option.

clip_image034

This will generate a script in the Queries directory that simply counts the number of rows in the database. When the query is shown, simply hit F5 to run it.

clip_image036
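If you prefer a single query to per-table counts, SQL Server's catalog views can produce the same information; this is a standalone sketch, not the script that DBSourceTools generates:

```sql
-- A hand-rolled alternative to the generated Count Rows script: approximate
-- row counts for every user table, taken from partition metadata.
SELECT SCHEMA_NAME(t.schema_id) + '.' + t.name AS TableName,
       SUM(p.rows) AS [RowCount]
FROM sys.tables t
JOIN sys.partitions p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)  -- heap or clustered index only
GROUP BY t.schema_id, t.name
ORDER BY TableName;
```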

Summary

In this post, we have shown how to set up a new DBSourceTools project, and how to use an existing database as a source database. We then viewed the source database scripting options, and set options to script data for all tables in the database. We then created a deployment target for this source database, updated the Run_Create_Scripts.bat file by using the Write Targets process, and deployed the database to our local machine.

SQL Server Database Source Code control with DBSourceTools

No source control of databases

All too often I find that development teams will meticulously source control and code-review changes to their application source code, but this process is never applied to databases. In just about every TEST, UAT and even PROD database that I work with, changes to the database schema over time will leave broken stored procedures, broken views and even orphaned child records. By broken, I mean that the stored procedure or view is relying on fields that have been re-named or removed from the underlying tables. These procedures will never run successfully until they have been identified and fixed.
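Outside of any tooling, you can ask SQL Server itself to flag these broken modules; the following is a rough sketch (not part of DBSourceTools) that re-parses each view and stored procedure and reports any that no longer compile:

```sql
-- Sketch: iterate over all views and stored procedures and ask the engine
-- to re-parse each one. sp_refreshsqlmodule raises an error for modules
-- that reference renamed or dropped columns and tables.
-- (Schema-bound and encrypted modules may need to be excluded.)
DECLARE @name sysname;
DECLARE module_cursor CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
    FROM sys.objects o
    WHERE o.type IN ('V', 'P');  -- views and stored procedures
OPEN module_cursor;
FETCH NEXT FROM module_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC sp_refreshsqlmodule @name;
    END TRY
    BEGIN CATCH
        PRINT 'Broken module: ' + @name + ' : ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM module_cursor INTO @name;
END
CLOSE module_cursor;
DEALLOCATE module_cursor;
```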

Problems with a common DEV database

Changing database schemas present an even greater problem when development teams all use a common DEV database. If one developer applies a patch to a common DEV database as part of their check-in process, it can easily break everyone else’s environment and unit tests until each developer in turn updates their code to match the latest check-in. Even worse, if this check-in has introduced a bug, and causes further unit tests to fail on a build server, then the entire team is forced to scramble to fix the broken build, or even roll back the offending changeset and restore the database to allow other developers to continue working. Using a common DEV database just does not make sense.

This common database approach can also lead to serious headaches when a source tree has been branched for either a feature or as part of a production deployment. If schema changes occur on the trunk branch, this can seriously impact anyone still using the older schema on a production branch. Again, using a common DEV database just does not make sense.

Use an isolated database instance per developer

The sensible way of dealing with schema changes is two-fold.

Firstly, each developer must have their own instance of the database as a “play pen” – so that they can make schema changes or massage data without interfering with another developer’s working environment.

Secondly, each developer must have a quick and easy way to restore a database to a point in time – before their changes were applied. In this way, scripts that developers are writing can be re-run against a “clean” version of the database, effectively testing these scripts before checking in.

Moreover, when a source code tree is branched, the database schema and data at that time should be branched along with the source code, so that any developer working on the branch will be able to recreate a “clean” database with the schema that relates to that branch.

Include your database in Source Control

The solution to these database dilemmas is surprisingly simple. Script out your entire database and store these scripts on disk, and check them in to source control. These scripts are then used to re-create each developer’s working database. As the scripts are in source control, you now have a fully source controlled and versioned database. This gives developers the same freedoms as normal application code, with the ability to branch, merge, check-in, checkout with freedom.

The only problem is that this scripting process should be as easy as compiling source code in an IDE. That is, checkout from source, open up your IDE, compile your code and check for errors. This is where DBSourceTools comes in.

DBSourceTools

DBSourceTools is designed to help developers source-control their databases. It will script an entire database to disk. Once these scripts are on disk, they can be used to re-create the database in its entirety. Adding these scripts to source control allows developers to re-create a database as it was at any point in time in its source control history.

This mechanism is very similar to Microsoft Database Projects – Microsoft themselves generate scripts within a database project, and deploy these scripts (essentially compiling the database) under the covers.

The following diagram shows the basic usage of DBSourceTools:

clip_image002

1. Connect to a source database. If using an established database, then this is usually the TEST or UAT instance.

2. DBSourceTools will script all objects within this database, and (optionally) its data to local disk.

3. This directory structure is then committed into Source Control.

4. DBSourceTools then loads these scripts from disk, and includes any patches in the patches directory to be run after the database is created.

5. DBSourceTools then deploys the database to a new Target Database (usually on the local SQL instance), loads all data, and applies any patches.

a. Note that this is a two-step process: DBSourceTools will DELETE the target database, and then completely RECREATE it from scratch.

6. These patches can then be added to source control.

Scenario 2:

Once added to Source Control, a second developer can re-create this database without a connection to the original source database – as all required objects and data are part of the files on disk. The following diagram shows this process:

clip_image004

1. Update source tree on local disk from Source Control

2. This update will fetch all required scripts, data and patches from Source Control.

3. Run DBSourceTools to load the project.

4. Deploy the target database (usually to the local SQL instance).

Benefits of using DBSourceTools.

All developers use their own local instance of the database.

This means that two developers can make their own schema changes to an isolated instance of the database independently of each other, and not step on each other’s toes. Data Access Layer objects can be modified, and will only take effect once both the code and the database patches are committed to Source Control.

Databases are an instant in time.

Because all database objects are scripted to disk, and DBSourceTools DELETES and then RECREATES its Target database, a developer can quickly and effectively RESET the database to an instant in time – before any changes were made.

This instant in time is protected by Source Control, so going back to a specific changeset means that the database can be re-created at the instant in time that the changeset was created.

Patching

Using the very simple patching mechanism, developers can easily test whether their patches will be successful, in an iterative manner:

  • Delete and re-create the database from source – to an instant in time before any changes were made.
  • Write SQL scripts, and test them against the database.
  • Bundle these SQL scripts into a patch, and include it in the Patches directory.
  • Delete and re-create the database in one step, including the new patch.
  • Ensure that the patch worked correctly.

Merging changes from other developers.

    When patches are added to source control from other developers, it is a simple matter of updating the patches directory with their changes, and re-deploying the database. DBSourceTools will run all of the patches in one go – thereby checking to see whether your patch works correctly with new patches committed by other developers.

    If your patch does not work correctly because of other patches, you can easily modify it, re-run it again and again before checking into source control.

    TEST, UAT and PROD patching.

    If you are unsure whether a patch will work correctly on TEST, UAT or PROD data, then it is a simple matter of getting a backup of any of these databases, scripting them to disk, and running through the patches as normal.

    Database compilation.

    By re-building a database from source files, errors in a database schema can be quickly found. As databases evolve over time, quite often stored procedures or views become unusable because they are targeting fields or tables that have been renamed or removed. By re-building a database from scripts, these problems can be quickly identified and resolved.

    Data included

    DBSourceTools has powerful options to select which data should be scripted to disk. Select all tables, or just “configuration” tables when scripting a database to disk. DBSourceTools uses the SQLBulkCopy routines, and can load millions of records in a matter of seconds.

    By including data with your database, applications can be load tested with data volumes, or debugged against PROD or UAT data – all within the safety of a local SQL instance.

    Step-by-step Tutorial

    Let’s go through the process of using DBSourceTools in a step by step manner. We will start with a blank database, and then use the patching mechanism to create some tables and insert some data.

    Create a blank database

    To start off with, create a blank database on your local SQL instance – using SQL Management Studio, and call it TutorialDb_0.0.
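If you prefer a script to the Management Studio UI, the equivalent T-SQL is a one-liner (using default file locations):

```sql
-- Create the empty tutorial database with default settings.
CREATE DATABASE [TutorialDb_0.0];
```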

    clip_image006

    Loading a Source Database

    Now fire up DBSourceTools, and select New Project. This project will need a name, which we have specified as Tutorial_Project_0.0, and it will need a base directory on disk – which we have chosen to be d:\source\TutorialDb:

    clip_image008

    Now click on the Database button. This will give you the following database screen:

    clip_image010

    A database Nick Name can be anything, but nick names must be UNIQUE across a project. I prefer to use the source server name as the prefix, then the database name, and then a version number. If you were scripting this database from a TEST environment, then I would name this database TEST_TutorialDB_0.0, or if from PROD, then PROD_TutorialDb_0.0.

    You can connect to any server, use Windows Auth or SQL Auth. Once you have selected an Authentication scheme, click on the Databases button to bring up a list of databases on that server, and select which one you require.

    Then click on the Test Connection button. The Test Connection button simply checks to see whether you have the correct permissions on the source database to allow for scripting. If not, you will need to modify the permissions on the source database to allow for db_owner privileges.
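Granting those privileges is a one-off administrative step; a sketch of the T-SQL (the login name here is a placeholder you would replace with your own) might look like:

```sql
-- Hypothetical example: grant db_owner on the source database to the
-- login used by DBSourceTools. Replace DOMAIN\YourUser with your login.
USE [TutorialDb_0.0];
ALTER ROLE db_owner ADD MEMBER [DOMAIN\YourUser];
-- On SQL Server 2008 and earlier, use:
-- EXEC sp_addrolemember 'db_owner', 'DOMAIN\YourUser';
```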

    Once the connection has succeeded, click on OK, then Ok again. Your source database connection is now setup. DBSourceTools will then prompt if you would like to load the database now. Click Yes.

    Once the load process is finished, you will see your source database on the DBExplorer panel on the left hand side of the screen.

    image

    It is a good idea to save this project at this stage, so click on File | Save Project, and then click on No when DBSourceTools asks you if you would like to refresh the data.

    Creating a Target Database

    Our database does not have anything in it as yet, but let’s create a target database so that we can start using the patching engine. Click on Database | Add | New Deployment Target

    clip_image014

    This brings up a similar database connection dialog as follows:

    clip_image016

    The only required fields on this screen are Nick Name, and Database. Note the naming convention for the Nick Name. I always prefix a deployment Target database with the word deploy, followed by the servername, followed by the database name, and an incremented version. Again, these nick-names must be UNIQUE across a DBSourceTools project.

    It is not necessary to click on the Databases or Test Connection buttons when creating a Target Database.

    Clicking OK here will create a deployment target database that is under the tree structure of your original source database. Expand the tree until you can see this new database.

    image

    Again, remember to Save the project now.

    Deploying the Target Database

    To deploy our Source database to our Target database, right-click on the Target database, and select Properties:

    clip_image020

    This will bring up the Target database properties in the panel on the right hand side:

    clip_image022

    Click on the Deploy Target button. This button will open a new window, and execute the Run_Create_Scripts.bat file which is on disk, and is a child of the deploy_local_TutorialDb_0.1 directory:

    clip_image024

    This new database (local_TutorialDb_0.1) should now be created on your local SQL server.

    Creating Patches

    Now that we have deployed our source database to the target database, we can start creating patches. These patches will be attached to our Source database – under the patches directory of the deployment target. When writing and creating patches, I always find it handy to have both Source and Target database available within the same project.

    Add your target database as a source.

    Click on Database | Add | New Source Database to create a new Source database within the same project:

    clip_image026

    This source database will actually be the Target (local_TutorialDb_0.1) database that we deployed earlier. Use the New Database Connection dialogue to specify this as the source database:

    clip_image028

    Note that I have used the same naming convention for the source database Nick Name as earlier: machine name, database name, version number. We can also click on the Databases button to select the local_TutorialDb_0.1 database from the available list, and then we need to click on Test Connection before the OK button will become available.

    Once you have hit OK, Click on Yes to load the database into DBSourceTools.

    clip_image030

    This will load the new database as a source database, and include it in the DBExplorer:

    clip_image032

    Remember that our source database is the one at the top, and has a version number of 0.0. The database that we are deploying to is on the bottom, and has a version number of 0.1.

    Creating and Scripting Tables.

    This new source database (local_TutorialDb_0.1) is now a “Playground” instance that we can use to create tables, insert data, or generally design our new database in. Once we have made changes to this database, we will need to ensure that we create patches from these new tables, views, etc, and include them in the Patches Directory of our deployment target.

    Deploying from a source database (0.0) to a target database (0.1) will completely delete the target database before re-creating it and running patches. So remember that the “Playground” database can be wiped clean at any stage, and you can start from scratch if you make any unrecoverable mistakes.

    You can create a new table using SQL Management Studio, or simply by running SQL scripts, or in whatever way you like.

    Once created, though, make sure that you use DBSourceTools to script your database tables. When scripting tables from SQL Management Studio, the generated scripts DO NOT include any indexes that you may have created on the table, or in fact any related objects. You will need to generate scripts for your indexes in a separate step, and then combine these scripts to re-create your table successfully. This is obviously error-prone and time-consuming.

    DBSourceTools will script tables, indexes and any related objects in one step.

    As an example of this, let’s create a new table using DBSourceTools.

    Create a Table

    Right-click on the 0.1 version of your database, then select New Query, New Table to generate a sample script for creating a new table:

    clip_image034

    The resulting script has the basics of the SQL that you will require in order to create a new table. Note that most of the script is just a list of commented SQL datatypes, inserted as a handy reference should you want to refresh your memory on how to use different datatypes.

    At the bottom of the script is a line to add a constraint for a primary key – don’t forget to fill in the blanks here: all tables should have a primary key!

    Modify the script to look something like this:

    CREATE TABLE [dbo].[MyFirstTable](
        [Id] [bigint] identity(1,1) NOT NULL,
        [Name] [nvarchar](25) NOT NULL,
        [Description] [nvarchar](100) NULL,
        CONSTRAINT [PK_MyFirstTable] PRIMARY KEY ( [ID] ASC )
    ) ON [PRIMARY]
    GO

    Executing queries

    To execute the current SQL query against the current database, simply hit F5.

    clip_image036

    Saved Queries

    Notice the DBExplorer on the left-hand side. DBSourceTools has created a new Queries directory under local_TutorialDb_0.1, and saved your create table script as Query.sql. This Queries directory will be used for any scripts that DBSourceTools creates – so it is a quick and handy way of going back to older scripts.

    Any query under the Queries directory will be run against its parent database by default, so it is a handy “ScratchPad” area to use when working with and running scripts.

    Reloading from Database

    All well and good so far, but our new table has not appeared in the DBExplorer tree view as yet. This is because DBSourceTools by default loads databases from disk, not from the database itself. What we need to do now is refresh what is on disk with what is actually in the database. To do this, right-click on the source database, and select Load from Database:

    clip_image038

    This will refresh the database structure from the updated database.

    Once this is complete, you will see the MyFirstTable appear under the Tables node of the database.

    Clicking on the expand tree icons, you will notice that DBSourceTools adds some handy features when working with database objects.

    clip_image040

    Firstly, double-clicking on the table name will bring up a source code view of the table definition. Secondly, the table has a Data icon. Double clicking on this data icon will open up a new window, and immediately show all data in the table. In SQL Management Studio, this is a two-step process – you need to right-click on the table and then click select top(1000) to have a quick view of your data.

    Thirdly, there is a Fields icon, and expanding this will show a list of field names and their data types.

    To view the SQL script definition of the table, simply double click on the table name:

    clip_image042

    Inserting Data

    You can insert data into a table in whatever manner you choose, but DBSourceTools can also help generate a sample script. Right-click on the table, and select Script Insert. The generated script will provide enough information to be able to simply fill in the blanks:

    clip_image044

    We can modify this script pretty easily to insert two records into MyFirstTable:

    insert into MyFirstTable(
        /*[Id] bigint primary key (identity) */
        [Name] /* NOT NULL */
        ,[Description]
    )
    values (
        /*[Id] bigint primary key (identity) */
        'First name' /*Name NOT NULL nvarchar */
        ,'First description' /*Description nvarchar */
    )

    insert into MyFirstTable(
        /*[Id] bigint primary key (identity) */
        [Name] /* NOT NULL */
        ,[Description]
    )
    values (
        /*[Id] bigint primary key (identity) */
        'Second name' /*Name NOT NULL nvarchar */
        ,'Second description' /*Description nvarchar */
    )

    Now hit F5 to run the script.

    Viewing and Scripting Data

    Double click on the data icon in the DBExplorer under the MyFirstTable icon:

    clip_image046

    This will bring up the data view, showing all records currently in the table. From here we can easily create an insert script to include this data in a patch. Simply click on the Script Data button in the Data Window. The generated script will automatically set IDENTITY_INSERT on and then off to preserve our identity seed on the Id column, and also set NOCOUNT ON while running the script:

    SET NOCOUNT ON
    SET IDENTITY_INSERT [MyFirstTable] ON
    insert into [MyFirstTable] ( [Id],[Name],[Description] ) values ( 1,'First name','First description' )
    insert into [MyFirstTable] ( [Id],[Name],[Description] ) values ( 2,'Second name','Second description' )
    SET IDENTITY_INSERT [MyFirstTable] OFF
    SET NOCOUNT OFF

    This script and the table definition are now ready for patching.

    Creating patches

    To include our new database table definition and its data in a deployment step, we will now create two patches under the Patches directory of our original source database. Use the DBExplorer window to expand the tree as follows: local_TutorialDb_0.0 > Deployment Targets > deploy_local_TutorialDb_0.1 > Patches.

    Right-click on the Patches icon, and select New Patch:

    clip_image048

    Fill in the patch name. Note that patches are loaded alphabetically, so make sure that you number your patches. We will create a Patch_001_table_MyFirstTable as follows:

    clip_image050

    Create a second patch using the same process, and call this patch Patch_002_data_MyFirstTable.

    Double clicking on a patch will open up the script in an editor window. So edit the Patch_001_table_MyFirstTable, and copy the definition of the MyFirstTable into it. Remember that double-clicking on any table name will bring up the database script used for the table – so find the table MyFirstTable, double-click on it, and copy the create script. Paste it into Patch_001, and save the file.

    Edit the Patch_002_data_MyFirstTable by simply double-clicking on it. Copy and paste the insert script created into the previous step into this patch, and save the file.

    Including the patches in the deployment script

    DBSourceTools uses a simple batch file to run the deployment scripts, load tables and data, and run the patches. The file that it uses is called Run_Create_Scripts.bat, and lives as the first file underneath the deployment target database. Double-click on this file to see what it contains:

    set DB_BASE_DIR=D:\source\TutorialDb\local_TutorialDb_0.0\
    set BASE_BIN_DIR=C:\Program Files (x86)\DBSourceTools\
    set PROJECT_BASE_DIR=D:\source\TutorialDb\
    set PATCH_DIR=D:\source\TutorialDb\local_TutorialDb_0.0\DeploymentTargets\deploy_local_TutorialDb_0.1\
    rem
    sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_TutorialDb_0.1\local_TutorialDb_0.1_DropDB.sql
    sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_TutorialDb_0.1\local_TutorialDb_0.1_CreateDB.sql

    As we can see, this file is just setting some global variables, and then running a DropDB and CreateDB script. We now need to update this script to include our new patches.

    This process is called Writing Targets. Using the DBExplorer, right-click on the Deployment Targets node of the source database, and click Write Targets.

    clip_image052

    This process also includes the option of refreshing data from the source database. At this time our source database is blank, so we can safely say No here.

    clip_image054

    Once this process has finished, open up the Run_Create_Scripts.bat file again. If you already have this file open, you may be viewing the in-memory version of this file, so it is always safer to close the file first, and then re-open it by double-clicking on the file.

    Note how DBSourceTools has added our two patches at the bottom of the script:

      sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_TutorialDb_0.1\local_TutorialDb_0.1_DropDB.sql
      sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_TutorialDb_0.1\local_TutorialDb_0.1_CreateDB.sql
      sqlcmd -f 850 -S (local) -d local_TutorialDb_0.1 -E -i "%PATCH_DIR%Patches\Patch_001_table_MyFirstTable.sql"
      sqlcmd -f 850 -S (local) -d local_TutorialDb_0.1 -E -i "%PATCH_DIR%Patches\Patch_002_data_MyFirstTable.sql"

    This Run_Create_Scripts.bat file is at the heart of the deployment process. Any time that we deploy a target database, this script will be run. Make sure that whenever you add patches or change options on the source database, you remember to do the Write Targets step to update this file.

    Killing databases.

    To re-deploy our database including our new patches, simply right-click on the target database and select properties. This will bring up the database properties screen, where you can hit the Deploy button to start the deployment process.

    If your script hangs or gives errors during the database drop step, it may be that there are still connections open to the target database, which will interfere with the drop command. To close all existing connections and drop the database in one step, simply click on the Kill database button.

    Let’s use this kill step, and then re-deploy the database:

    Right-click on the deployment database named deploy_local_TutorialDb_0.1, and select Properties.

    clip_image056

    This will bring up the Properties window, with the Deploy Target and Kill Database buttons on the menu bar:

    clip_image058

    Go ahead and click on the Kill Database button to close all existing connections to the local_TutorialDB_0.1 database, and drop it in one step.

    clip_image060

    clip_image062

    We can now hit the Deploy Target button to run the Run_Create_Scripts.bat file and re-create the database:

    clip_image064

    Check to see that there is no red text in the output window – if there is, then something has gone wrong with the deployment.

    Checking output results

      DBSourceTools keeps a copy of each deployment result in a text file in the same directory as Run_Create_Scripts.bat. If we fire up Explorer and navigate to this directory, we will find a DeployResults_ file for each deployment. The contents of this file are an exact copy of the output window above. If you encounter errors, have a look at these files as a record of each of your previous deployment runs.

    Verify the Target Database.

      Once we have completed the deployment step, we can re-load our new database structure from the deployed database, just to ensure that we still have all of our tables and data loaded correctly. To do this, right-click on the local_TutorialDB_0.1 database in the DBExplorer view, and select Load from Database.

    clip_image066

    Expand the nodes of this database to ensure that it contains the MyFirstTable, and then double click on the Data icon to check that this table has data.

    clip_image068

    Add files to Source Control

    The last step in this process is to add files to your Source Control engine.

    Sharing your database

      DBSourceTools uses full path names for project files and script names. Unfortunately, this means that all developers must have the same path names on their machines in order to share databases. This can easily be accomplished by substituting the same drive letter on each developer machine. Drive substitution is different from mapping a network path, and is accomplished using the subst command at a DOS prompt.

      Let's assume that developer 1 stores his source code in the following location:

    C:\users\dev1\source\

    And developer 2 stores his source code at:

    C:\source

    If the DBSourceTools base directory has been set at d:\source\TutorialDB, then we will need both developers to have the same d:\source\TutorialDB directory structure.

      This can easily be accomplished by using the subst command to map a virtual d: drive onto c:\users\dev1. Run the following command at a DOS prompt:
      subst d: c:\users\dev1

    If substitutions are necessary for your developer machine, then you can easily create a quick batch file to do this substitution, and run it on startup.

    The same substitution on developer 2’s machine would simply be

      subst d: c:\

    This ensures that the directory used by DBSourceTools is the same across both machines: d:\source\TutorialDb.

    Summary

    In this tutorial, we have used DBSourceTools to start with a blank database, deploy it to a target database, modify the target database, and then reverse-engineer our changes back into patch scripts.

      At the end of this process, we can simply hand over the patches to a DBA, who will be able to re-create our shiny new database on TEST, UAT and PROD boxes. DBAs generally create the databases themselves, as the disk space requirements and subtle tweaks needed in each environment differ slightly, and they know their environments best. DBSourceTools can be used to help write these scripts.

    TypeScript: Using Backbone.Marionette and REST WebAPI (Part 2)

    This is Part 2 of an article that aims to walk the reader through setting up a Backbone.Marionette SPA application with Visual Studio 2013, and in particular, write Marionette apps using TypeScript.

    As always, the full source code for this article can be found on github: blorkfish/typescript-marionette

    Part 1 can be found here, and covered the following:

    • Setting up a Visual Studio 2013 project.
    • Install required nuGet packages.
    • Creating the ASP.NET HomeController and View.
    • Including required javascript libraries in our Index.cshtml.
    • Creating a Marionette.Application.
    • Adding a Marionette Region
    • Referencing the Region in the Application
    • Creating a Marionette.View
    • Using bootstrap to create a clickable NavBar Button
    • Using Backbone Models to drive NavBar buttons.
    • Creating a Backbone.Collection to hold multiple Models
    • Creating a Marionette.CompositeView to render collections
    • Rendering Model Properties in templates
    • Using Marionette Events.

    In this part of the article, we will cover the following:
    • Creating an ASP.NET WebAPI Data Controller.
    • Unit testing the WebAPI Data Controller in C#
    • Modifying the WebAPI Data Controller to return a collection of nested C# POCO objects.
    • Defining TypeScript Backbone.Model classes to match our nested class structure.
    • Writing Jasmine unit tests for our Backbone Collection.
    • Creating a Marionette.CompositeView to render data in a bootstrap table.
    • Using a Marionette.CompositeView as an ItemView
    • Rendering nested Backbone.Collections
    • Using CompositeView properties to generate html.
    • Using bootstrap styles in a Marionette.CompositeView

    By the end of this tutorial, we will have generated a nested Json structure as follows: UserList -> User -> RoundScores -> RoundScore.

    We will then render it as follows:

    image

        So let’s get started.

        Creating an ASP.NET WebAPI Data Controller.

        Creating an ASP.NET WebAPI Data Controller is just as simple as creating a normal MVC Controller. All we need to do is derive our class from ApiController instead of Controller, and then specify the url used when calling this controller.

        The latter is accomplished by adding two attributes to a method on our ApiController: the [Route] attribute to specify the url, and an HTTP verb attribute from the System.Web.Http namespace ([HttpGet], [HttpPost] or [HttpDelete]) to specify whether this is a REST get, post or delete.
        Go ahead and create a file under the /Controllers directory named HomeDataController.
        Derive this class from ApiController (in the System.Web.Http namespace), and add a Route attribute, as well as an HttpGet attribute, as in the code below:
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;
        
        namespace typescript_marionette.Controllers
        {
            public class HomeDataController : ApiController
            {
                [Route("api/dataservices")]
                [HttpGet]
                public HttpResponseMessage GetDataTable()
                {
                    return Request.CreateResponse<IEnumerable<string>>(HttpStatusCode.OK, GetData());
                }
        
                public List<string> GetData()
                {
                    return new List<string> {"test1", "test2"};
                }
        
            }
        }
        There are a couple of things to note about the code above.
        Firstly, the [Route("api/dataservices")] attribute. This defines the route to our DataController function, so firing up a web browser and pointing it to /api/dataservices will hit this DataController.
        Secondly, the [HttpGet] attribute. This attribute defines the method signature as allowing REST GETs.
        Thirdly, the return type of HttpResponseMessage, and the return syntax: return Request.CreateResponse<type>. Together these will return JSON when JSON is requested, or XML when XML is requested.
        Fourth, the HttpStatusCode.OK will return a success callback to any JavaScript calling code. Interestingly enough, throwing an Exception anywhere in the call stack will return an error callback to any JavaScript calling code. This is all built in when using classes deriving from ApiController. Later on, we will create a Jasmine unit test to exercise our error callback, to make sure that we are handling errors correctly.
        Next, note that I have created a separate method, public List<string> GetData(), where I could simply have built this List<string> within the GetDataTable() method directly. Splitting these methods will help us with unit testing later on: one C# xUnit test will target the GetData() function, and then we will use Jasmine to unit test the Json response returned by GetDataTable().
        Lastly, the IEnumerable<type> syntax. In the code sample above, we are simply returning a string type, but further down the line we will define [Serializable] POCOs to return nested Json, such that returning IEnumerable<MyType> will return full MyType objects, along with any child classes or collections defined, automatically transformed into Json or Xml. Cool, huh?
        Unfortunately, firing up a web browser and typing in the url /api/dataservices at this stage will not work. Using IE, you will get the very helpful error message "The webpage cannot be found", with a 404 status:

      image

      Using Chrome, the error message is slightly more helpful: No HTTP resource was found that matches the request URI 'http://localhost:65147/api/dataservices'.

      This is down to one missing line of code. Navigate to the App_Start directory, and double-click on the WebApiConfig.cs file. Modify the code to call config.MapHttpAttributeRoutes() as shown below. This line is called on app startup, and simply traverses the code to find our [Route] and HTTP verb attributes, and then adds them to the Route Table.

      namespace typescript_marionette
      {
          public static class WebApiConfig
          {
              public static void Register(HttpConfiguration config)
              {
                  config.MapHttpAttributeRoutes();
      
                  config.Routes.MapHttpRoute(
                      name: "DefaultApi",
                      routeTemplate: "api/{controller}/{id}",
                      defaults: new { id = RouteParameter.Optional }
                  );
              }
          }
      }
      

      Firing up Chrome at this stage and navigating to /api/dataservices will now generate the expected result: a string array with two entries, test1 and test2:

      image

      I have always found Chrome or Firefox far easier to work with when manually testing DataControllers. IE, for some reason, does not have a default rendering engine for pure data: it always tries to download the json response, and then you need to select a program to view this data, which is terribly annoying.

      image

      image

      So stick to any other browser except IE when manually testing DataControllers.

      Unit-testing the WebAPI Data Controller in C#

      Right, so what would a good Data Controller be without unit tests ? 

      As far as I understand, Hamlet D’Arcy is credited with the saying “[Without unit tests] You’re not refactoring, you’re just changing shit.”

      So best we create some unit tests, before we start changing shit…

      Personally, I have been using xUnit as a testing framework for some time now. 

      Add a new project to your solution called typescript-marionette-xunit.  Make sure the project type is a Class Library:

      image

      Delete the Class1.cs file that is automagically created for you.

      Now let's add the nuGet packages for xUnit. Go to Tools | Library Package Manager | Package Manager Console.

      At the top of the screen, next to the Package source dropdown, there is a Default project dropdown.  Make sure that you have selected the typescript-marionette-xunit project here :

      skitch_screenshot_1

      Now install xunit as follows:

      Install-Package xunit -Version 1.9.2

      Install-Package xunit.extensions -Version 1.9.2

      Next, add a reference to the typescript-marionette project: right-click on References, select Add Reference, and then choose the typescript-marionette project under Solution Projects:

      image

      Now add a Controllers directory, and then a HomeDataControllerTests.cs class as follows:

      using System.Collections.Generic;
      using typescript_marionette.Controllers;
      using Xunit;
      
      namespace typescript_marionette_xunit.Controllers
      {
          public class HomeDataControllerTests
          {
              [Fact]
              public void GetData_Returns_ListOfStrings()
              {
                  HomeDataController controller = new HomeDataController();
                  Assert.Equal(new List<string> { "test", "test" }, controller.GetData());
              }
          }
      }

      Running this unit test will produce the following error:

      Assert.Equal() Failure
      Position: First difference is at position 0
      Expected: List<String> { "test", "test" }
      Actual:   List<String> { "test1", "test2" }

      I firmly believe that you should always write a test that fails first, before modifying your code to make the test pass. Fixing this one is as simple as changing the expected List<string> to { "test1", "test2" }.

      Modifying the WebAPI Data Controller to return a collection of nested C# POCO objects.

      It's time now to get the DataController to return some real data. For the purposes of this article, let's assume we want to return a list of users, and how they scored per round. The classes involved are shown in the following class diagram:

      skitch_screenshot_2

      Create a ResultsModels.cs file under the /Models directory, as follows:

      using System;
      using System.Collections.Generic;
      
      namespace typescript_marionette.Models
      {
          [Serializable]
          public class UserModel
          {
              public UserModel()
              {
                  RoundScores = new List<RoundScore>();
              }
              public string UserName;
              public string RealName;
              public List<RoundScore> RoundScores;
          }
      
          [Serializable]
          public class RoundScore
          {
              public int RoundNumber;
              public int TotalPoints;
          }
      }

      Now, let’s modify the HomeDataController to return a list of these models.  At the same time, we may as well define a new url (api/results) to return these results:

              [Route("api/results")]
              [HttpGet]
              public HttpResponseMessage GetUserResults()
              {
                  return Request.CreateResponse<IEnumerable<UserModel>>
                      (HttpStatusCode.OK, GetUserResultModels());
              }
      
              public List<UserModel> GetUserResultModels()
              {
                  return new List<UserModel>
                  {
                      new UserModel { UserName = "testUser_1", RealName = "Test User No 1",
                          RoundScores =  new List<RoundScore>
                      {
                            new RoundScore { RoundNumber = 1, TotalPoints = 2 }
                          , new RoundScore { RoundNumber = 2, TotalPoints = 3 }
                          , new RoundScore { RoundNumber = 3, TotalPoints = 2 }
                          , new RoundScore { RoundNumber = 4, TotalPoints = 5 }
                      } },
                      new UserModel { UserName = "testUser_2", RealName = "Test User No 2", 
                          RoundScores =  new List<RoundScore>
                      {
                            new RoundScore { RoundNumber = 1, TotalPoints = 5 }
                          , new RoundScore { RoundNumber = 2, TotalPoints = 6 }
                          , new RoundScore { RoundNumber = 3, TotalPoints = 2 }
                          , new RoundScore { RoundNumber = 4, TotalPoints = 1 }
                      }  },
                      new UserModel { UserName = "testUser_3", RealName = "Test User No 3", 
                          RoundScores =  new List<RoundScore>
                      {
                            new RoundScore { RoundNumber = 1, TotalPoints = 3 }
                          , new RoundScore { RoundNumber = 2, TotalPoints = 5 }
                          , new RoundScore { RoundNumber = 3, TotalPoints = 6 }
                          , new RoundScore { RoundNumber = 4, TotalPoints = 6 }
                      }  }
                  };
              }
      

      Now let’s fire up our application, and browse to /api/results ( using Chrome ) to see what we get:

      image

      Defining TypeScript Backbone.Model classes to match our nested class structure.

      At this point, we will need some Backbone.Model classes and a Backbone.Collection to retrieve data from our /api/results url. Backbone.Collections have a very simple way of retrieving data from REST services: simply specify the url property. As an example, if we were to modify the NavBarButtonCollection (that we created in Part 1) to load data from a REST service, we would do the following:

      class NavBarButtonCollection extends Backbone.Collection {
          constructor(options?: any) {
              super(options);
              this.model = NavBarButtonModel;
              this.url = "/api/navbars";
          }
      }

      So let's create some Backbone.Models to emulate the C# POCO class structure for UserModel and RoundScore that we built in C#. The trick here is to use TypeScript interfaces to define the relationships. In the /tscode/models directory, create a new TypeScript file named UserResultModels.ts. For simplicity, I have included the interfaces and Models in the same TypeScript file. The interface definitions for the returned Json objects are shown below. Note how we are defining the nested properties as arrays [ ]. Also, the property names must exactly match the C# POCO property names.

      interface IRoundScore {
          RoundNumber?: number;
          TotalPoints?: number;
      }
      
      interface IUserModel {
          UserName?: string;
          RealName?: string;
          RoundScores?: IRoundScore [];
      }
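      As a quick sanity check of this name-matching rule, here is a standalone sketch (the hand-written JSON string below simply mirrors the /api/results payload): because the interface property names match the C# field names exactly, the output of JSON.parse() can be typed directly as these interfaces, with no mapping layer in between.

```typescript
// Standalone sketch: a JSON string mirroring the /api/results payload.
interface IRoundScore {
    RoundNumber?: number;
    TotalPoints?: number;
}

interface IUserModel {
    UserName?: string;
    RealName?: string;
    RoundScores?: IRoundScore[];
}

const json = '[{"UserName":"testUser_1","RealName":"Test User No 1",'
    + '"RoundScores":[{"RoundNumber":1,"TotalPoints":2}]}]';

// The parsed object graph already satisfies IUserModel[].
const users: IUserModel[] = JSON.parse(json);

console.log(users[0].UserName);                     // testUser_1
console.log(users[0].RoundScores![0].TotalPoints);  // 2
```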

      Next, we create the Backbone.Model class based on these interfaces.  Note that the ES5 property getters and setters match the signatures of the interfaces.

      class RoundScore extends Backbone.Model implements IRoundScore {
          get RoundNumber(): number { return this.get('RoundNumber'); }
          set RoundNumber(value: number) { this.set('RoundNumber', value); }
      
          get TotalPoints(): number { return this.get('TotalPoints'); }
          set TotalPoints(value: number) { this.set('TotalPoints', value); }
      
          constructor(input: IRoundScore) {
              super();
              for (var key in input) {
                  if (key) { this[key] = input[key]; }
      
              }
          }
      }
      
      class UserModel extends Backbone.Model implements IUserModel {
          get UserName(): string { return this.get('UserName'); }
          set UserName(value: string) { this.set('UserName', value); }
      
          get RealName(): string { return this.get('RealName'); }
          set RealName(value: string) { this.set('RealName', value); }
      
          get RoundScores(): IRoundScore[] { return this.get('RoundScores'); }
          set RoundScores(value: IRoundScore[]) { this.set('RoundScores', value); }
      
          constructor(input: IUserModel) {
              super();
              for (var key in input) {
                  if (key) { this[key] = input[key]; }
      
              }
          }
      }
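      The for...in loop in both constructors is just a generic property copy: every key present on the input object is assigned to the instance, which routes each value through the matching ES5 setter and hence through Backbone's set(). Stripped of the Backbone dependency, the pattern looks like the sketch below (PropertyBag is a hypothetical name used purely for illustration):

```typescript
// Hypothetical, Backbone-free sketch of the copy-constructor pattern above:
// copy every property present on the input object onto the instance.
class PropertyBag {
    [key: string]: any;

    constructor(input: { [key: string]: any }) {
        for (var key in input) {
            if (key) { this[key] = input[key]; }
        }
    }
}

var score = new PropertyBag({ RoundNumber: 1, TotalPoints: 5 });
console.log(score["RoundNumber"]); // 1
console.log(score["TotalPoints"]); // 5
```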

      Now to create our collection: ( again in /tscode/models/UserResultModels.ts )

      class UserResultCollection extends Backbone.Collection {
          constructor(options?: any) {
              super(options);
              this.model = UserModel;
              this.url = "/api/results";
          }
      }

      As a quick test of this collection, let's load it in our MarionetteApp as a variable. Note that we have passed { async: false } to the fetch() function of the UserResultCollection. This option halts execution of the calling thread (not asynchronous) until the collection is loaded. In general, it is better practice NOT to specify this parameter unless absolutely necessary. The initializeAfter() function in MarionetteApp.ts is shown below:

          initializeAfter() {
              var navBarButtonCollection: NavBarButtonCollection = new NavBarButtonCollection(
                  [
                      { Name: "Home", Id: 1 },
                      { Name: "About", Id: 2 },
                      { Name: "Contact Us", Id: 3 }
                  ]);
      
              var navBarView = new NavBarCollectionView({ collection: navBarButtonCollection });
      
              navBarView.on("itemview:navbar:clicked", this.navBarButtonClicked);
      
              this.navbarRegion.show(navBarView);
      
              var resultsCollection = new UserResultCollection();
              resultsCollection.fetch({ async: false });
          }

      Before trying to debug this code, don't forget to include the UserResultModels.js file in our Views/Home/Index.cshtml file:

          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonCollection.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonModel.js"></script>
          
          <script language="javascript" type="text/javascript" src="../../tscode/models/UserResultModels.js"></script>

      Setting a breakpoint after the collection is loaded, and inspecting the resultsCollection variable in Visual Studio, should show that we have successfully loaded the json returned from our REST ApiController:

      skitch_screenshot_3

      But debugging and manually verifying that our collection is loaded correctly is just that: manual. And manual is time-consuming, error-prone, and just a pure pain. So let's write a unit test to verify that our model loads correctly from the C# DataController.

      Unit testing the Backbone Collection with jasmine

      To set up a unit test for our UserResultCollection, we will create a web page named SpecRunner.html. This is a simple web page that just includes all of our required .js files, and then calls jasmine's execute() function.

      Firstly, create a /tscode/test directory, then add an html page to this directory named SpecRunner.html. This file is very similar to /Views/Home/Index.cshtml, and should also include the meta tag http-equiv for IE. Simply copy the <head> section from Index.cshtml. As well as including all of our source .js files, we will also need to include /Scripts/jasmine.js, /Scripts/jasmine-html.js, and the /css/jasmine.css file as follows:

      <!DOCTYPE html>
      
      <html>
      <head>
          <meta http-equiv="X-UA-Compatible" content="IE=edge">
          <title>Index</title>
      
          <link rel="stylesheet" href="../../Content/bootstrap.css" type="text/css" />
          <link rel="stylesheet" href="../../Content/app.css" type="text/css" />
          
          <script language="javascript" type="text/javascript" src="../../Scripts/jasmine.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/jasmine-html.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/jasmine-jquery.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../Scripts/jquery-1.9.1.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/json2.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/underscore.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/backbone.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/backbone.marionette.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/bootstrap.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/MarionetteApp.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/views/NavBarItemView.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/views/NavBarCollectionView.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonCollection.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonModel.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/models/UserResultModels.js"></script>
      </head>
      <body>
          <script type="text/javascript">
              var jasmineEnv = jasmine.getEnv();
              jasmineEnv.addReporter(new jasmine.HtmlReporter());
              jasmineEnv.execute();
          </script>
      </body>
      </html>

      Note that the jasmine-jquery.js file is not included in the nuGet package for jasmine-js. You will need to download the file from here [ jasmine-jquery-1.3.1.js ], and then save it into your /Scripts directory.

      To run this file, simply right-click on it and select the menu option Set As Start Page, then hit F5 to debug. But don't do it just yet: if you do, you will end up with a blank page. Why? Well, we haven't written any jasmine tests yet.

      Writing Jasmine unit tests for our Backbone Collection.

      In the /tscode/test directory, create a directory named models. Now create a TypeScript file for our UserResultCollection tests named UserResultCollectionTests.ts, and include the reference paths for our definition files at the top.

      Jasmine tests all fall within a describe('test suite name', () => { ... tests go here ... }) block, which defines the test suite name. Within this describe function, each test is defined with the it('test description', () => { test goes here... }) function as follows:

      /// <reference path="../../../Scripts/typings/jquery/jquery.d.ts"/>
      /// <reference path="../../../Scripts/typings/underscore/underscore.d.ts"/>
      /// <reference path="../../../Scripts/typings/backbone/backbone.d.ts"/>
      /// <reference path="../../../Scripts/typings/marionette/marionette.d.ts"/> 
      /// <reference path="../../../Scripts/typings/jasmine/jasmine.d.ts"/> 
      /// <reference path="../../../Scripts/typings/jasmine-jquery/jasmine-jquery.d.ts"/> 
      
      describe('/tscode/test/models/UserResultCollectionTests.ts ', () => {
      
          it('should fail', () => {
              expect('undefined').toBe('defined');
          });
      
      });

      Now, just include the generated .js file in your SpecRunner.html file:

          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonCollection.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonModel.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/models/UserResultModels.js"></script>
          <script language="javascript" type="text/javascript" src="./models/UserResultCollectionTests.js"></script>

      Running the app now ( F5 ) should show the results of the jasmine tests:

      image

      Ok, now that we have Jasmine up and running, let’s write some unit tests for our UserResultCollection.  Jasmine provides two functions – beforeEach() and afterEach() – that run before and after each test to perform setup and teardown.  In beforeEach(), we set up our collection, and can then re-use it in each of our tests as follows:

      describe('/tscode/test/models/UserResultCollectionTests.ts ', () => {
      
          var userResultCollection: UserResultCollection;
      
          beforeEach(() => {
              userResultCollection = new UserResultCollection();
              userResultCollection.fetch({ async: false });
          });
      
          it('should return 3 records from HomeDataController', () => {
              expect(userResultCollection.length).toBe(3);
          });
      
      });

      To find a specific instance in this collection, we can use the underscore.js functions where() and findWhere().  where() will return a collection of all elements that match the criteria, and findWhere() will return a single Model in our collection that matches the selection criteria:

          it('should find 1 UserModel with Name testUser_1', () => {
              var userModels = userResultCollection.where({ UserName: 'testUser_1' });
              expect(userModels.length).toBe(1);
      
          });
      
          it('should return a UserModel with Name testUser_1', () => {
              var userModel = userResultCollection.findWhere({ UserName: 'testUser_1' });
              expect(userModel).toBeDefined();
          });
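The semantics of these two functions are easy to pin down with a dependency-free sketch.  The following is an illustrative re-implementation over a plain array – not underscore’s actual code – and the names sampleUsers, matches, where and findWhere are invented for this sketch.  In a real Backbone.Collection the criteria are matched against each model’s attributes, but the contract is the same:

```typescript
// Illustrative sketch of underscore's where()/findWhere() semantics
// over a plain array (not underscore's actual implementation).
interface Attrs { [key: string]: any; }

function matches(item: Attrs, criteria: Attrs): boolean {
    return Object.keys(criteria).every(key => item[key] === criteria[key]);
}

// where() : every element that matches the criteria
function where(list: Attrs[], criteria: Attrs): Attrs[] {
    return list.filter(item => matches(item, criteria));
}

// findWhere() : only the first matching element, or undefined
function findWhere(list: Attrs[], criteria: Attrs): Attrs | undefined {
    for (var i = 0; i < list.length; i++) {
        if (matches(list[i], criteria)) {
            return list[i];
        }
    }
    return undefined;
}

var sampleUsers: Attrs[] = [
    { UserName: 'testUser_1' },
    { UserName: 'testUser_2' },
    { UserName: 'testUser_3' }
];
```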

      Jasmine tests can also be nested.  This means that we can describe() a set of tests that will use the parent’s beforeEach() and afterEach() functions to run the tests.  Within this nested describe() block, we can also create our own beforeEach() functions.  This provides us with a handy way of testing a single model within the userResultCollection:

          it('should return a UserModel with Name testUser_1', () => {
              var userModel = userResultCollection.findWhere({ UserName: 'testUser_1' });
              expect(userModel).toBeDefined();
          });
      
          // this describe block is nested inside our main describe block
          describe(' UserModel tests ', () => {
              var userModel: UserModel;
              beforeEach(() => {
                  // the userResultCollection is setup in the parent beforeEach() function
                  userModel = <UserModel> userResultCollection.findWhere({ UserName: 'testUser_1' });
              });
      
              it('should set UserName property', () => {
                  expect(userModel.UserName).toBe('testUser_1');
              });
      
              // check that we are getting an array for our nested JSON objects
              it('should set RoundScores property', () => {
                  expect(userModel.RoundScores.length).toBe(4);
              });
          });

      Now let’s use the same technique to get the third RoundScore model from the array of RoundScores for this UserModel:

              // check that we are getting an array for our nested JSON objects
              it('should set RoundScores property', () => {
                  expect(userModel.RoundScores.length).toBe(4);
              });
      
              // nested describe block re-uses the userModel set in parent beforeEach()
              describe('RoundScore tests', () => {
                  var roundScore: RoundScore;
                  beforeEach(() => {
                      roundScore = <RoundScore> userModel.RoundScores[2]; // get the third RoundScore
                  });
      
                  it('should have RoundNumber set to 3', () => {
                      expect(roundScore.RoundNumber).toBe(3);
                  });
                  it('should have TotalPoints set to 2', () => {
                      expect(roundScore.TotalPoints).toBe(2);
                  });
              });

      image

      Creating a Marionette.CompositeView to render the Backbone Collection in a table.

      So we are now confident that our Backbone Collection is working correctly.  The next step is to create a Marionette.CompositeView to render this collection in a table.  In the /tscode/views directory, create a new TypeScript file named UserResultViews.ts.  Once again, extend from Marionette.CompositeView, and set the options.template property:

      class UserResultsView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultsViewTemplate";
              super(options);
          }
      }

      Next, update the Index.cshtml to provide an html snippet that matches the options.template property (#userResultsViewTemplate) above.  While we are at it, let’s also create a new Marionette Region for our UserResultsView to render into.  Modify the Index.cshtml as follows, and don’t forget to include the new JavaScript file in the <head> element.

          <script language="javascript" type="text/javascript" src="../../tscode/views/UserResultViews.js"></script>
              <div class="container"> @* wrap the row with a container *@ 
                  <div class="row">
                      <div class="col-lg-12">
                          @*<h1>Hello Home Controller</h1> // old code *@
                          <div id="userResultRegion"></div> @*  new region  *@
                      </div>
                  </div>
              </div>
              
              @*  new template  *@ 
              <script type="text/template" id="userResultsViewTemplate">
                  This is the userResultsViewTemplate.
              </script>

      Next, we need to modify our MarionetteApp to include the new region, create a UserResultsView, and show this view in the region:

      class MarionetteApp extends Marionette.Application {
          navbarRegion: Marionette.Region;
          userResultRegion: Marionette.Region; // new region
          constructor() {
              super();
              this.on("initialize:after", this.initializeAfter);
              this.addRegions({ navbarRegion: "#navbarRegion" });
              this.addRegions({ userResultRegion: "#userResultRegion" }); // new region
          }
          initializeAfter() {
              var navBarButtonCollection: NavBarButtonCollection = new NavBarButtonCollection(
                  [
                      { Name: "Home", Id: 1 },
                      { Name: "About", Id: 2 },
                      { Name: "Contact Us", Id: 3 }
                  ]);
              var navBarView = new NavBarCollectionView({ collection: navBarButtonCollection });
              navBarView.on("itemview:navbar:clicked", this.navBarButtonClicked);
              this.navbarRegion.show(navBarView);
      
              var userResultView = new UserResultsView(); // create the new view
              this.userResultRegion.show(userResultView); // show the view
          }
          navBarButtonClicked(itemView: Marionette.ItemView, buttonId: number) {
              alert('Marionette.App handled NavBarItemView clicked with id :' + buttonId);
          }
      }

      If all goes well, we should see the new template displayed on the page:

      image

      Using a Marionette.CompositeView as an ItemView:

      Now that we have the top-level view rendering correctly, let’s create another CompositeView to serve as the view for each user in our UserResultCollection.  Simply create another CompositeView named UserResultItemView, give it a template, and then set the parent itemView property to the new class name as follows:

      class UserResultsView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultsViewTemplate";
              super(options);
              this.itemView = UserResultItemView; // set the child view here
          }
      }
      
      // new ItemView class
      class UserResultItemView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultItemViewTemplate"; // new template
              super(options);
          }
      }

      Next, create the html for userResultItemViewTemplate in the Index.cshtml:

              <script type="text/template" id="userResultItemViewTemplate">
                  This is the userResultItemViewTemplate for : <%= UserName %>
              </script>
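The <%= UserName %> token is underscore.js template syntax: when the view renders, the model’s attributes are interpolated into the template.  As a rough illustration of what that interpolation does (underscore’s real _.template function is considerably more capable, and the renderTemplate name here is invented), consider:

```typescript
// Rough sketch of underscore-style "<%= name %>" interpolation.
// Illustrative only - underscore's real _.template does much more.
function renderTemplate(tpl: string, data: { [key: string]: any }): string {
    return tpl.replace(/<%=\s*(\w+)\s*%>/g,
        function (_match: string, name: string) { return String(data[name]); });
}

var rendered = renderTemplate(
    "This is the userResultItemViewTemplate for : <%= UserName %>",
    { UserName: "testUser_1" });
// rendered === "This is the userResultItemViewTemplate for : testUser_1"
```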

      Finally, construct and fetch a new UserResultCollection in the MarionetteApp, and pass this collection to the UserResultsView:

              var userResultCollection = new UserResultCollection();
              userResultCollection.fetch({ async: false });
      
              var userResultView = new UserResultsView({ collection: userResultCollection }); // pass in the collection
              this.userResultRegion.show(userResultView);

      Running our app now will render an item for each element found in our UserResultCollection:

      image

      Rendering nested Backbone.Collections.

      Cool.  So now we need another ItemView to render our RoundScores per user (this is the nested collection within our Users collection).  All we need is a ResultItemView to render a single RoundScore; then we set the parent itemView property to our new child view, exactly as we did before.  Remember to create an html template in our Index.cshtml as well:

      class UserResultItemView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultItemViewTemplate"; 
              super(options);
              this.itemView = ResultItemView; // set the child view here
          }
      }
      
      // new ResultItemView class
      class ResultItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#resultItemViewTemplate"; // new template
              super(options);
          }
      }

      Index.cshtml:

              <script type="text/template" id="resultItemViewTemplate">
                  This is the resultItemViewTemplate
              </script>

      Running our app now should show a ResultItemView for each RoundScore per user, right ?

      image

      Ok, so what went wrong ?

      In order to use an ItemView, the composite view needs its collection property set correctly.  Remember that when we instantiated the top-level view, we passed our collection in the constructor:

      var userResultView = new UserResultsView({ collection: userResultCollection }); // pass in the collection

      Each item in this collection creates a new instance of the UserResultItemView class, and passes it the model to render.  So all we need to do is set our collection property in the constructor, and create our own internal collection based on the incoming model.  Before we do this, however, let’s just create a quick collection to hold RoundScores.  In /models/UserResultModels, create a new collection named RoundScoreCollection to hold RoundScore models as follows:

      class RoundScoreCollection extends Backbone.Collection {
          constructor(options?: any) {
              super(options);
              this.model = RoundScore;
          }
      }
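As an aside, the mechanism described above – one child view instantiated per model in the collection – can be sketched without Marionette at all.  This is a simplified illustration of the pattern (all class names here are invented for the sketch), not Marionette’s actual implementation:

```typescript
// Simplified illustration of the CompositeView pattern : one child
// view per model in the collection. Class names are invented for
// this sketch - this is not Marionette's actual code.
interface SketchModel { [key: string]: any; }

class SketchItemView {
    constructor(public model: SketchModel) { }
    render(): string {
        return "<td>" + this.model['UserName'] + "</td>";
    }
}

class SketchCompositeView {
    constructor(public collection: SketchModel[]) { }
    render(): string {
        var html = "";
        // instantiate a child view for each model, then collect the output
        for (var i = 0; i < this.collection.length; i++) {
            html += new SketchItemView(this.collection[i]).render();
        }
        return html;
    }
}
```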

      Now we can modify the constructor of UserResultItemView to set the collection:

      class UserResultItemView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultItemViewTemplate"; 
              super(options);
              this.itemView = ResultItemView; 
              // set internal collection:
              this.collection = new RoundScoreCollection(options.model.RoundScores);
          }
      }

      Running the app now will render correctly:

      image

      Using CompositeView properties to generate html

      One of the advantages of using Marionette.CompositeView is the ability to control the rendered html.  Let’s update our template html and Views to render the results in a table.

      Firstly, modify the html template for userResultsViewTemplate to create a <thead> and <tbody> as follows:

              <script type="text/template" id="userResultsViewTemplate">
                  <thead>
                      <tr>
                          <th>UserName</th>
                          <th>1</th>
                          <th>2</th>
                          <th>3</th>
                          <th>4</th>
                      </tr>
                  </thead>
                  <tbody></tbody>
              </script>

      Obviously, we need to wrap this html in a topmost <table> tag – and render our child views within the <tbody> html region.  These two settings are made in the UserResultsView:

      class UserResultsView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultsViewTemplate";
              options.tagName = "table"; // outer tag
              options.itemViewContainer = "tbody"; // itemview container
              super(options);
              this.itemView = UserResultItemView; 
          }
      }

      Now update the html template for userResultItemViewTemplate to wrap the UserName property in a <td> tag, and set the outer tagName for the UserResultItemView to <tr>:

              <script type="text/template" id="userResultItemViewTemplate">
                  <td><%= UserName %></td>
              </script>
      class UserResultItemView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultItemViewTemplate";
              options.tagName = "tr"; // outer tagname
              super(options);
              this.itemView = ResultItemView; 
              this.collection = new RoundScoreCollection(options.model.RoundScores);
          }
      }

      Finally, update the html resultItemViewTemplate to render TotalPoints in a div, and update the tagName to use <td>:

              <script type="text/template" id="resultItemViewTemplate">
                  <div><%= TotalPoints %></div>
              </script>
      class ResultItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#resultItemViewTemplate";
              options.tagName = "td"; // outer tagname
              super(options);
          }
      }

      Running our app now will create a table as follows:

      image

      Using bootstrap styles in a Marionette.CompositeView

      Finally, let’s add some classes to our CompositeView to use bootstrap styles to render the table.  Update the UserResultsView and specify the className property:

      class UserResultsView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultsViewTemplate";
              options.tagName = "table";
              options.className = "table table-hover"; // inject a class 
              options.itemViewContainer = "tbody"; 
              super(options);
              this.itemView = UserResultItemView; 
          }
      }

      Running the app now gives us our final result: rendered html based on nested JSON from a WebAPI DataController:

      image

      And that wraps it up.

      Have fun,

      blorkfish.

      TypeScript : Using Backbone.Marionette and REST WebAPI (Part 1)

      This article ( Part 1 and Part 2) aims to walk the reader through setting up a Backbone.Marionette SPA application with Visual Studio 2013, and in particular, write Marionette JavaScript using TypeScript.  Generally, writing JavaScript applications relies on RESTful web services, so this blog will also aim to show how to accomplish this with ASP.NET WebAPI data controllers.

      Backbone.Marionette was designed to simplify large scale JavaScript applications.  After a mate of mine ( three votes, two votes, two votes one. ) loaned me a book on Marionette, I decided to give it a whirl.

      Being the unit-test junkie that I am, I also aim to show how to build JasmineJs unit-tests to test your REST services, and also show how to generate and use nested JSON objects.  Nested JSON data fits hand in glove with Backbone Models and TypeScript.

      Update : Many thanks to Alastair who pointed out some typos. These have now been fixed.

      Firstly, a few links:

      Backbone.Marionette.js: A gentle introduction is an excellent book by David Sulc, that I used to gently introduce me to Marionette. 

      Marionettejs.com is the official Marionette site – downloads and documentation.

      MarionetteJS 1.4.1 on nuget is the nuGet repository by benb1in.

      marionette.TypeScript.DefinitelyTyped is the nuGet repository for TypeScript definitions by Jason Jarrett

      DefinitelyTyped/marionette holds the TypeScript definition .d.ts files for Marionette, by sventschui.

      Twitter.Bootstrap is the nuGet repository for Bootstrap 3.0.1.1 by Jacob Thornton and Mark Otto

      bootstrap.TypeScript.DefinitelyTyped is the nuGet repository for TypeScript definitions of bootstrap by Jason Jarrett

      As always, the full source code for this article can be found on github: blorkfish/typescript-marionette

      I have broken this article up over two parts, as we will be covering quite a few techniques and sample code – but fear not, it is quite fast-paced with tangible results every step of the way.  As an overview, we will accomplish the following:

      • Setting up a Visual Studio 2013 project.
      • Install required nuGet packages.
      • Creating the ASP.NET HomeController and View.
      • Including required javascript libraries in our Index.cshtml.
      • Creating a Marionette.Application.
      • Adding a Marionette Region
      • Referencing the Region in the Application
      • Creating a Marionette.View
      • Using bootstrap to create a clickable NavBar Button
      • Using Backbone Models to drive NavBar buttons.
      • Creating a Backbone.Collection to hold multiple Models
      • Creating a Marionette.CompositeView to render collections
      • Rendering Model Properties in templates
      • Using Marionette Events.
        At the end of Part 1, our awesome application will look like this, and will use events to notify our App when a navbar button is clicked:

      image

      image

      In Part 2 of this article we will cover the following:

        • Creating an ASP.NET WebAPI Data Controller.
        • Unit testing the WebAPI Data Controller in C#
        • Modifying the WebAPI Data Controller to return a collection of nested C# POCO objects.
        • Defining TypeScript Backbone.Model classes to match our nested class structure.
        • Writing Jasmine unit tests for our Backbone Collection.
        • Creating a Marionette.CompositeView to render data in a bootstrap table.
        • Using a Marionette.CompositeView as an ItemView
        • Rendering nested Backbone.Collections
        • Using CompositeView properties to generate html.
        • Using bootstrap styles in a Marionette.CompositeView
        When we are complete with this tutorial, we will have generated a nested Json structure as follows: UserList –> User –> RoundScores –> RoundScore.
        We will then render it as follows:
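Based on the property names used in the sample code later in this article (UserName, RoundScores, RoundNumber, TotalPoints), the returned JSON has roughly the following shape.  The interface names and the values shown here are illustrative only:

```typescript
// Illustrative shape of the nested JSON returned by the DataController.
// Property names come from the sample code in this article; the
// interface names and values are made up for illustration.
interface IRoundScoreData {
    RoundNumber: number;
    TotalPoints: number;
}

interface IUserData {
    UserName: string;
    RoundScores: IRoundScoreData[];
}

var sampleUserList: IUserData[] = [
    {
        UserName: "testUser_1",
        RoundScores: [
            { RoundNumber: 1, TotalPoints: 1 },
            { RoundNumber: 2, TotalPoints: 3 },
            { RoundNumber: 3, TotalPoints: 2 },
            { RoundNumber: 4, TotalPoints: 5 }
        ]
    }
];
```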

      image

       

      Setting up a Visual Studio 2013 project.

      Firstly, let’s create a new Visual Studio project.  Make sure that you select an ASP.NET MVC 4 Web Application.  This will allow for the addition of WebAPI data controllers.

      image

      Personally, I prefer creating an Empty web application – as Visual Studio makes it very easy to add Controllers and Views later.  Change the View Engine to Razor.

      image

      Next, let’s switch to the .NET framework 4.5.1.  Right-click on the project and choose Properties.  Under Application, change the Target framework to .NET Framework 4.5.1:

      image

      Install required nuGet packages.

      Next, install the following nuGet packages.  Click on TOOLS | Library Package Manager | Package Manager Console, and type the following:

      Install-Package Backbone.Marionette

      Install-Package marionette.TypeScript.DefinitelyTyped

      Install-Package jasmine-js

      Install-Package jasmine.TypeScript.DefinitelyTyped

      Install-Package Twitter.Bootstrap

      Install-Package bootstrap.TypeScript.DefinitelyTyped

      Install-Package jasmine-jquery.TypeScript.DefinitelyTyped

      Install-Package json2

      Install-Package Newtonsoft.Json

      Install-Package Microsoft.AspNet.WebApi

      Install-Package underscore.TypeScript.DefinitelyTyped

      NOTE : Compiling the project now will generate about 120 compile errors, similar to the following:

      Build: Duplicate identifier ‘abort’. Additional locations

      To fix this, navigate to the Scripts / typings / jasmine directory, and delete the jasmine-1.3.d.ts file from the project.

      Creating the ASP.NET Home Controller and View.

      Next, we will need to create a Controller and View to serve up a simple web page.

      From Solution Explorer, right-click on the Controllers directory, and select Add | Controller.  Call it HomeController, and use the Empty MVC Controller Template:

      image

      Now create a Home directory under Views – and then right-click on the Home directory, and select Add | View .  Name the View “Index” as below:

      image

      Our solution explorer should look as follows:

      image

      Modify the Index.cshtml with some Hello world text as follows – just to check that we can hit this view:

      @{
          Layout = null;
      }
      
      <!DOCTYPE html>
      
      <html>
      <head>
          <meta name="viewport" content="width=device-width" />
          <title>Index</title>
      </head>
      <body>
          <div>
              <h1>Hello Home Controller</h1>
          </div>
      </body>
      </html>

      Running the application now ( hit F5 ) should successfully run the HomeController, and serve our Index.cshtml page:

      image

      Including required javascript libraries in our Index.cshtml.

      Our next step is to reference some of the javascript libraries that we downloaded via nuGet in order for Marionette to work correctly.  Note that nuGet has placed the downloaded javascript libraries in the folder /Scripts – and the downloaded TypeScript definition files in /Scripts/typings.

      Also, when running with Internet Explorer, we will need to ensure that IE uses the correct engine when parsing both the DOM and JavaScript.  This is accomplished by adding a meta tag to the page to force IE to use the latest JavaScript engine (by default, IE will revert to the IE 8 engine).

      Modify the Index.cshtml page to include the following javascript libraries – and add the meta tag for IE as follows:

      @{
          Layout = null;
      }
      
      <!DOCTYPE html>
      
      <html>
      <head>
          <meta http-equiv="X-UA-Compatible" content="IE=edge">
          <title>Index</title>
      
          <link rel="stylesheet" href="../../Content/bootstrap.css" type="text/css" />
      
          <script language="javascript" type="text/javascript" src="../../Scripts/jquery-1.9.1.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/json2.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/underscore.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/backbone.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/backbone.marionette.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/bootstrap.js"></script>
          
      </head>
      <body>
          <div>
              <h1>Hello Home Controller</h1>
          </div>
      </body>
      </html>

      Creating a Marionette.Application

      All Marionette SPAs start with a Marionette.Application.  The Marionette.Application has a number of responsibilities, including responding to application-wide events, and defining broad html “regions” that relate to specific application controllers and views.

      To create a Marionette Application, simply create a class that extends from Marionette.Application.

      This is one of the greatest advantages of using Backbone and Marionette with TypeScript.  The TypeScript extends keyword does exactly what you would expect it to do – it extends the definition of an existing class.  Just what you would expect from an Object Oriented language.  Unfortunately, most of the popular JavaScript libraries are not built this way, and use configuration settings to drive object behaviour.  Coming from a strongly typed background, I personally find it easier to understand and work with Backbone and Marionette because of this.
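As a plain-TypeScript illustration of that point (no Backbone or Marionette involved; the class names here are invented), extends gives the subclass all of the parent’s members, and super() runs the parent’s constructor:

```typescript
// Plain TypeScript illustration of extends/super. The class names
// are invented for this sketch - the same pattern is used when
// deriving from Marionette.Application.
class BaseApplication {
    started: boolean = false;
    start(): void {
        this.started = true;
    }
}

class DerivedApp extends BaseApplication {
    name: string;
    constructor() {
        super();                  // run the parent constructor first
        this.name = "DerivedApp"; // then do subclass initialization
    }
}

var app = new DerivedApp();
app.start(); // start() is inherited from BaseApplication
```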

      I plan to write a blog fairly soon comparing how Backbone, ExtJs, AngularJs and Marionette shape up as compatible libraries when using TypeScript and Visual Studio as your main development tools.  In the  meantime, though, Marionette wins the race hands-down (IMHO).

      So back to the task at hand.  I prefer to keep all TypeScript code in a separate directory, named /tscode. Go ahead and create this directory, and then create a new TypeScript file in the /tscode directory named MarionetteApp.ts:

      Add | New Item | Visual C# | Web | TypeScript File:

      /// <reference path="../Scripts/typings/jquery/jquery.d.ts"/>
      /// <reference path="../Scripts/typings/underscore/underscore.d.ts"/>
      /// <reference path="../Scripts/typings/backbone/backbone.d.ts"/>
      /// <reference path="../Scripts/typings/marionette/marionette.d.ts"/>
      
      class MarionetteApp extends Marionette.Application {
          constructor() {
              super();
              this.on("initialize:after", this.initializeAfter);
          }
          initializeAfter() {
              alert("initializeAfter called");
          }
      }

      Now, let’s include this file in the Index.cshtml.  Note that to start up a Marionette application, we need to instantiate an instance of our Marionette.Application, and call the start() function:

      Modify your Index.cshtml file – firstly to include the MarionetteApp.js file in the <head> element, and then with a script at the bottom to instantiate the application and call start():

      @{
          Layout = null;
      }
      
      <!DOCTYPE html>
      
      <html>
      <head>
          <meta http-equiv="X-UA-Compatible" content="IE=edge">
          <title>Index</title>
      
          <link rel="stylesheet" href="../../Content/bootstrap.css" type="text/css" />
      
          <script language="javascript" type="text/javascript" src="../../Scripts/jquery-1.9.1.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/json2.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/underscore.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/backbone.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/backbone.marionette.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/bootstrap.js"></script>
          
          <script language="javascript" type="text/javascript" src="../../tscode/MarionetteApp.js"></script>
      </head>
          <body>
              <div>
                  <h1>Hello Home Controller</h1>
              </div>
      
              <script type="text/javascript">
                  var marionetteApp = new MarionetteApp();
                  marionetteApp.start();
              </script>
          </body>
      </html>

      Running the web application now should call the alert() function :

      image

      Adding a Marionette Region

      As mentioned before, one of the responsibilities of a Marionette Application is to create and manage regions.  Think of a region as a broad section of your html that Marionette will inject DOM elements into.  These regions can be shown and hidden – and can even transition with jQuery animations.  To create a Region, simply add a div to your html page, and specify an id.  As we have already installed bootstrap with nuGet, let’s use it to create a navbar panel with a region in it. 

      Modify your Index.cshtml as follows:

       <body>
              
              <div class="navbar navbar-inverse navbar-fixed-top">
                  <div class="container">
                      <div class="row">
                          <div class="col-lg-6">
                              <div >TypeScript Marionette</div>
                          </div>
                          <div class="col-lg-6">
                              <div id="navbarRegion" >
                                  <p>navbarRegion</p>
                              </div>
                          </div>
                      </div>
                  </div>
              </div>        
              
              <div class="row">
                  <div class="col-lg-12">
                      <h1>Hello Home Controller</h1>    
                  </div>
              </div>        
      
              <script type="text/javascript">
                  var marionetteApp = new MarionetteApp();
                  marionetteApp.start();
              </script>
          </body>

      Note that we are starting to use some bootstrap styles here – creating a navbar, containers and rows.

      Unfortunately, running the app now will produce a dark navbar, with the navbar text very difficult to read, as well as overlapping our Hello Home Controller Text:

      image

      To fix this, create an app.css file in the /Content directory, with the following styles:

      body { 
         padding-top : 60px; 
       } 
      
      .app-navbar-text { 
         font-family : Verdana, Arial, sans-serif; 
         font-size : 25px; 
         font-style : normal; 
         color : white; 
         padding-top : 5px; 
       } 
      

      Now include this app.css in the Index.cshtml file:

          <link rel="stylesheet" href="../../Content/bootstrap.css" type="text/css" />
          <link rel="stylesheet" href="../../Content/app.css" type="text/css" />
      

      And finally apply the app-navbar-text style to the divs in the navbar:

              <div class="navbar navbar-inverse navbar-fixed-top">
                  <div class="container">
                      <div class="row">
                          <div class="col-lg-6">
                              <div class="app-navbar-text">TypeScript Marionette</div>
                          </div>
                          <div class="col-lg-6">
                              <div id="navbarRegion" class="app-navbar-text">
                                  <p>navbarRegion</p>
                              </div>
                          </div>
                      </div>
                  </div>
              </div>        
      

      Your page should now at least be readable:

      image

      Referencing the Region in the Application

      To reference an html div as a Region in Marionette, we simply need to call the Marionette.Application function addRegions().  As this function is at the Application level, the easiest way to do this in TypeScript is to use a class property to store the Region as follows (note that the alert is now commented out):

      class MarionetteApp extends Marionette.Application {
          navbarRegion: Marionette.Region;
          constructor() {
              super();
              this.on("initialize:after", this.initializeAfter);
              this.addRegions({ navbarRegion: "#navbarRegion" });
          }
          initializeAfter() {
              //alert("initializeAfter called");
          }
      }

      We will use this region as a content placeholder to render our Views a bit later on.

      Creating a Marionette.ItemView

      In order to render something within the region, we will need to create a Marionette.ItemView.

      In much the same way as we provided a <div> as the html template for a Marionette.Region, we provide a <script> block with a type of “text/template” to a Marionette.ItemView to use as an html template.  Modify your index.cshtml file to include the following:

              <script type="text/template" id="navBarItemViewTemplate">
                  <p>NavBar View Template</p>
              </script>

      To create a Marionette.ItemView in TypeScript, simply derive a class from Marionette.ItemView.  To do this, create a views directory under /tscode, and add a new TypeScript file named NavBarItemView.ts.

      image

      The code for NavBarItemView.ts is shown below.  Note that the reference paths at the top of this file will need to change slightly in order to correctly reference the .d.ts files, as we are now two directories deeper than the base directory.

      In the constructor, the code initializes the options parameter before passing it to the base class.  To use the template that we created above, simply assign the template property of the options parameter to reference our <script> by id:

      /// <reference path="../../Scripts/typings/jquery/jquery.d.ts"/>
      /// <reference path="../../Scripts/typings/underscore/underscore.d.ts"/>
      /// <reference path="../../Scripts/typings/backbone/backbone.d.ts"/>
      /// <reference path="../../Scripts/typings/marionette/marionette.d.ts"/> 
      
      class NavBarItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#navBarItemViewTemplate";
              super(options);
          }
      }

      Now modify your index.cshtml file to include the generated NavBarItemView.js file:

          <script language="javascript" type="text/javascript" src="../../tscode/MarionetteApp.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/views/NavBarItemView.js"></script>
      </head>
      

      Lastly, modify the MarionetteApp to call show() on this view:

      class MarionetteApp extends Marionette.Application {
          navbarRegion: Marionette.Region;
          constructor() {
              super();
              this.on("initialize:after", this.initializeAfter);
              this.addRegions({ navbarRegion: "#navbarRegion" });
          }
          initializeAfter() {
              //alert("initializeAfter called");
              this.navbarRegion.show(new NavBarItemView());
          }
      }

      If we run the application now ( hit F5 ) – we should see the “navbarRegion” text in the page replaced by the html we specified in navBarItemViewTemplate:

      image

      Using bootstrap to create a clickable NavBar Button.

      Our NavBarItemView is all well and good, but let’s do something useful with it. 

      Most navigation bars are used to provide site-wide functionality – such as login / logout – so let’s update our NavBarItemView to show a bootstrap button, and make it clickable.

      To render our NavBarItemView as a bootstrap button, our template simply needs to have a css class assigned to it.  Modify the constructor of NavBarItemView to set the options.className as follows:

      class NavBarItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#navBarItemViewTemplate";
              options.className = "btn btn-primary";
              super(options);
          }
      }

      Running the app now will assign the correct classnames to our template rendering it as a bootstrap button:

      image

      In order to make the button clickable, simply set the options.events property.  We will need to specify a function to call when the click event is fired.  This function can be called anything.  In the sample below, I’ve called it onClickEvent():

      class NavBarItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#navBarItemViewTemplate";
              options.className = "btn btn-primary";
              options.events = { "click": "onClickEvent" };
              super(options);
          }
          onClickEvent() {
              alert('NavBarItemView clicked');
          }
      }

      Run the app now, click on the navbar button, and verify that the event is fired correctly:

      image

      How simple was that ?

      At this point it would be nice to render a couple of buttons in the navbar – but really this should be data-driven and not require too many changes to our ItemView.

      Enter Backbone models and collections.

      Let’s define a Backbone.Model to hold two properties for our buttons: Name and Id.  Then let’s create a collection of these models to drive rendering of multiple navbar buttons.

      Using Backbone Models to drive NavBar buttons:

      I’ve blogged previously about using strongly typed Backbone models with TypeScript ( can’t believe that was a year ago already! ), so if you would like a refresher then go and have a look.  Basically, we will be using ES5 syntax for model properties.  Go ahead and create a models directory under /tscode, and add a TypeScript file called NavBarButtonModel.ts.  Create a class that extends from Backbone.Model, called NavBarButtonModel.  Note that we also create a TypeScript interface definition for this model – which will help us when we start working with nested JSON and nested Backbone models and collections.

      The constructor takes the INavBarButtonModel as input, and then sets each of the properties in the for loop.  This simple technique of combining ES5 syntax with the constructor ensures that the model stays in sync with the underlying Backbone get and set functions, as well as giving us full type safety.

      The source for NavBarButtonModel is as follows:

      /// <reference path="../../Scripts/typings/jquery/jquery.d.ts"/>
      /// <reference path="../../Scripts/typings/underscore/underscore.d.ts"/>
      /// <reference path="../../Scripts/typings/backbone/backbone.d.ts"/>
      /// <reference path="../../Scripts/typings/marionette/marionette.d.ts"/> 
      
      interface INavBarButtonModel {
          Name?: string;
          Id?: number;
      }
      
      class NavBarButtonModel extends Backbone.Model implements INavBarButtonModel {
          get Name(): string { return this.get('Name'); }
          set Name(value: string) { this.set('Name', value); }
      
          get Id(): number { return this.get('Id'); }
          set Id(value: number) { this.set('Id', value); }
      
          constructor(input: INavBarButtonModel) {
              super();
              for (var key in input) {
                  if (input.hasOwnProperty(key)) { this[key] = input[key]; }
              }
          }
      }
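      To see why this pattern keeps the typed properties in sync with Backbone’s attribute store, here is a self-contained sketch.  Note that the Model class below is a minimal stand-in of my own for illustration – the real implementation comes from backbone.js:

      ```typescript
      // Minimal stand-in for Backbone.Model (illustrative only):
      // attributes are held in a plain hash behind get()/set().
      class Model {
          private attributes: { [name: string]: any } = {};
          get(name: string): any { return this.attributes[name]; }
          set(name: string, value: any): void { this.attributes[name] = value; }
      }

      // The same ES5 accessor pattern as NavBarButtonModel, trimmed to one property.
      class ButtonModel extends Model {
          get Name(): string { return this.get('Name'); }
          set Name(value: string) { this.set('Name', value); }
      }

      var model = new ButtonModel();
      model.Name = "Home";            // routes through set('Name', ...)
      console.log(model.get('Name')); // the underlying attribute store is in sync
      model.set('Name', "About");
      console.log(model.Name);        // and the typed property reads it back
      ```

      Writing through either the typed property or set() always lands in the same attribute hash – which is exactly why the ES5 syntax gives us type safety without breaking Backbone’s change-tracking.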

      Creating a Backbone.Collection to hold multiple Models.

      Now that we have a Backbone.Model defined, let’s define a Backbone.Collection to hold multiple buttons.  Create a NavBarButtonCollection.ts file in the models directory.  To define a Backbone.Collection, all we need to do is extend from Backbone.Collection, and set our model property to the name of the model class as follows.  Note that we have added a reference path at the top of the file to point to our model’s TypeScript file:

      /// <reference path="../../Scripts/typings/jquery/jquery.d.ts"/>
      /// <reference path="../../Scripts/typings/underscore/underscore.d.ts"/>
      /// <reference path="../../Scripts/typings/backbone/backbone.d.ts"/>
      /// <reference path="../../Scripts/typings/marionette/marionette.d.ts"/> 
      /// <reference path="./NavBarButtonModel.ts" />
      
      class NavBarButtonCollection extends Backbone.Collection {
          constructor(options?: any) {
              super(options);
              this.model = NavBarButtonModel;
          }
      }

      This collection will eventually be populated by JSON retrieved from a WebAPI service.  But for now we can create a new collection as in the code below.  Note that this code is just a sample of how to create a collection – we will include it in the MarionetteApp.ts file a little later:

              var navBarButtonCollection: NavBarButtonCollection = new NavBarButtonCollection(
                  [
                      { Name: "Home", Id: 1 },
                      { Name: "About", Id: 2 },
                      { Name: "Contact Us", Id: 3 }
                  ]);
      

      But once we have a collection of NavBarButtonModels, we will need a new View to render the collection.

      Creating a Marionette.CompositeView to render collections.

      We now have most of the building blocks in place in order to render this collection on our page.  The final piece is a view that will take the NavBarButtonCollection as input, and then instantiate a NavBarItemView for each model found in the collection.

      Let’s create a new Marionette.CompositeView for this purpose.  As usual, we first need to create a <script> region in our Index.cshtml file – which will serve as the html template for our composite view.  Modify the Index.cshtml file to add a template with the id of navBarCollectionViewTemplate.  For the moment, we will simply define the template, but leave it blank as follows:

              <script type="text/template" id="navBarCollectionViewTemplate">
              </script>
      

      Next, create a new TypeScript file under /tscode/views named NavBarCollectionView.ts.  This class will extend from Marionette.CompositeView.  As usual, we specify the name of the html template in the options.template property.

      We also need to set the itemView property to the class name of the view that is responsible for rendering an item – which in our case is NavBarItemView:

      /// <reference path="../../Scripts/typings/jquery/jquery.d.ts"/>
      /// <reference path="../../Scripts/typings/underscore/underscore.d.ts"/>
      /// <reference path="../../Scripts/typings/backbone/backbone.d.ts"/>
      /// <reference path="../../Scripts/typings/marionette/marionette.d.ts"/> 
      /// <reference path="./NavBarItemView.ts"/> 
      
      class NavBarCollectionView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#navBarCollectionViewTemplate";
              super(options);
              this.itemView = NavBarItemView;
          }
      }

      Next, we will need to include the new .js files in our Index.cshtml <head> section:

          <script language="javascript" type="text/javascript" src="../../tscode/MarionetteApp.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/views/NavBarItemView.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/views/NavBarCollectionView.js"></script>
          
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonCollection.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonModel.js"></script>
      

      Finally, we need to create an instance of the collection and pass it to the NavBarCollectionView.  Modify the MarionetteApp.ts file as shown below. 

      Note that we create a NavBarButtonCollection by simply passing an array of objects – very similar to what raw JSON would look like.  Also, we create a NavBarCollectionView and pass it the collection in another JSON style object – by setting the collection property to the newly created NavBarButtonCollection:

      class MarionetteApp extends Marionette.Application {
          navbarRegion: Marionette.Region;
          constructor() {
              super();
              this.on("initialize:after", this.initializeAfter);
              this.addRegions({ navbarRegion: "#navbarRegion" });
          }
          initializeAfter() {
              var navBarButtonCollection: NavBarButtonCollection = new NavBarButtonCollection(
                  [
                      { Name: "Home", Id: 1 },
                      { Name: "About", Id: 2 },
                      { Name: "Contact Us", Id: 3 }
                  ]);
      
              this.navbarRegion.show(new NavBarCollectionView({ collection: navBarButtonCollection }));
          }
      }

      Firing up the app now should show us three buttons in our navbar – although it’s not quite what we envisaged.

      image

      Rendering Model Properties in templates.

      What we really want here is to modify our ItemView template to display the Name value of the NavBarButtonModel instead of the static “NavBar View Template” text.

      Thankfully, it’s a piece of cake.

      Modify the <script type="text/template" id="navBarItemViewTemplate"> tag in Index.cshtml as follows:

              <script type="text/template" id="navBarItemViewTemplate">
                  <%= Name %>
              </script>

      Running the app now will render the button names based on the model properties:

      image

      Finally, let’s modify our click event in NavBarItemView to read the Id from the NavBarButtonModel.  You may notice that the call to get the Id property from the model is NOT using ES5 syntax.  As far as I understand, this is because Backbone is using the base Backbone.Model class internally, and therefore relies on the base get(‘attribute’) functions.

      class NavBarItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#navBarItemViewTemplate";
              options.className = "btn btn-primary";
              options.events = { "click": "onClickEvent" };
              super(options);
          }
          onClickEvent() {
              alert('NavBarItemView clicked with id :' + this.model.get('Id'));
          }
      }

      Clicking on any one of the buttons will now show a message with the model’s Id.

      image

      Triggering a Marionette event.

      For the final exercise of Part 1, let’s use Marionette events to notify the MarionetteApp when someone clicks on a Navbar menu item.

      Modify the onClickEvent() of the NavBarItemView to call Marionette’s trigger() function.  When triggering an event, we will need an event name (which can be anything), and we can also attach any data we need to the event.  In the code below, we are attaching the Model’s Id to the event.

      class NavBarItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#navBarItemViewTemplate";
              options.className = "btn btn-primary";
              options.events = { "click": "onClickEvent" };
              super(options);
          }
          onClickEvent() {
              this.trigger("navbar:clicked", this.model.get('Id'));
          }
      }

      Listening to a Marionette.Event

      To handle this event, we will make some changes to the MarionetteApp.  Basically, just call the on(‘eventname’, callback) method on the view, and provide a function callback (this.navBarButtonClicked).  Note too that the listening eventName is slightly different to the trigger eventName: itemview:navbar:clicked (handler) as opposed to just navbar:clicked (trigger).  This is because Marionette automatically attaches itemview: to any event that is fired by an ItemView.

      Also note the signature of the callback method: we have two parameters – itemView and buttonId.  Marionette always sends a handle to the originating ItemView as the first parameter to an event handler.  The second parameter is therefore our Model’s Id.

      class MarionetteApp extends Marionette.Application {
          navbarRegion: Marionette.Region;
          constructor() {
              super();
              this.on("initialize:after", this.initializeAfter);
              this.addRegions({ navbarRegion: "#navbarRegion" });
          }
          initializeAfter() {
              var navBarButtonCollection: NavBarButtonCollection = new NavBarButtonCollection(
                  [
                      { Name: "Home", Id: 1 },
                      { Name: "About", Id: 2 },
                      { Name: "Contact Us", Id: 3 }
                  ]);
      
              var navBarView = new NavBarCollectionView({ collection: navBarButtonCollection });
      
              navBarView.on("itemview:navbar:clicked", this.navBarButtonClicked);
      
              this.navbarRegion.show(navBarView);
          }
      
          navBarButtonClicked(itemView: Marionette.ItemView, buttonId: number) {
              alert('Marionette.App handled NavBarItemView clicked with id :' + buttonId);
          }
      
      }

      Running our App now – and clicking on a NavBarButton will then fire a Marionette event, which is then handled by the Marionette.App itself:

      image
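      The itemview: prefixing and extra view argument described above can be sketched in isolation as follows.  The SimpleEmitter class is purely illustrative – it is not Marionette’s actual implementation, just a bare-bones trigger/on pair showing how the parent view re-triggers a child’s event:

      ```typescript
      // Bare-bones event emitter (illustrative stand-in, not Marionette code).
      class SimpleEmitter {
          private handlers: { [name: string]: Function[] } = {};
          on(name: string, cb: Function): void {
              (this.handlers[name] = this.handlers[name] || []).push(cb);
          }
          trigger(name: string, ...args: any[]): void {
              (this.handlers[name] || []).forEach(cb => cb.apply(null, args));
          }
      }

      var childView = new SimpleEmitter();
      var parentView = new SimpleEmitter();

      // The parent listens to the child and re-triggers the event with the
      // "itemview:" prefix, prepending the child view itself as the first
      // argument - mirroring the Marionette behaviour we relied on above.
      childView.on("navbar:clicked", (id: number) => {
          parentView.trigger("itemview:navbar:clicked", childView, id);
      });

      parentView.on("itemview:navbar:clicked", (view: SimpleEmitter, id: number) => {
          console.log("handled click for id " + id);
      });

      childView.trigger("navbar:clicked", 2); // logs: handled click for id 2
      ```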

      Well, that’s it for Part 1 of this article.

      As mentioned, in Part 2 of this article we will cover the following:

      • Creating an ASP.NET WebAPI Data Controller.
      • Unit testing the DataController in C#
      • Updating the NavBarButtonCollection to use our DataController.
      • Unit testing the NavBarButtonCollection using Jasmine.
      • Creating a Marionette.CompositeView to render data in a bootstrap table.

      Have fun,

      blorkfish.

      Setting up TypeScript and AngularJs in Visual Studio 2013

      Update : 6 Nov 2014

      Update: Before you begin

      In the 9 months or so since this blog post was published, WordPress has recorded over 10,000 page hits on this blog entry alone.  I wish it wasn’t so.  My experience with AngularJs (1.x) was frustrating to say the least.  Soon after posting this blog, I dropped AngularJs as a framework.  There were just too many things that should have been simple that I spent hours and hours trying to figure out – and still couldn’t get working.

      As an example, when I tried to run an integration test to fetch json from my back-end database, I could not.  Every article I read said that I should mock out the http request and return mock data.  That is not the point of an integration test.  I need to ensure that the json returned from my service still works with my beautifully hand-crafted AngularJs application.  Angular’s unit testing framework would not allow me to do this.  So I gave up in favour of a simpler, more object-oriented, more testable and ultimately more explainable JavaScript framework.  But I did give it a good go.

      A work colleague of mine sent me a link to a blog post today that just resonated with my own experiences.  Lars Eidnes has persisted with the AngularJs framework for a long time, and posted a blog entitled AngularJS: The Bad Parts.  It makes for a very interesting read.

      In the end, choosing a framework is part personal choice, part experience, part guesswork.  My personal choice is not AngularJs.

      Update: Angular 2.0

      The recent announcement that the Microsoft and Angular teams have been working together on the Angular 2.0 framework is good news indeed.  The upcoming 1.5 version of the TypeScript compiler will include the first language structures to support Angular 2.0 syntax, and will include some elements of the AtScript language.  It will be a very interesting update,  with some very powerful new features.  Interesting times ahead, indeed.

      Mastering TypeScript Book : available April 2015

      Over the past couple of months I have been working very closely with the publishing team at Packt Publishing on a new book called “Mastering TypeScript”.  It is scheduled for publication in April 2015.  The concepts and ideas of this blog post have all been updated and expanded within the book.  You can read all about it here : https://www.packtpub.com/web-development/mastering-typescript

      B03967_MockupCover_Normal 

      Original Post:

      I have recently begun working through the AngularJs tutorials, and quickly found that they are geared towards development environments other than Visual Studio.  With the explosion of type definitions available via NuGet and DefinitelyTyped, I wanted to work through the process of getting up and running in AngularJs with a full Visual Studio 2013 development environment – complete with TypeScript debugging through F5.

      As one of my passions is unit-testing, I was particularly interested in AngularJs’ methodology for writing and running Jasmine unit tests, as well as writing and running e2e tests. In addition, debugging unit tests and e2e tests was a must-have – thereby providing a complete Visual Studio development environment.

      This blog therefore serves as a guide on how to setup TypeScript, AngularJs and Visual Studio 2013, including debugging Jasmine unit tests, debugging AngularJs e2e tests, and configuring a Continuous Integration build server ( TeamCity ).

      Firstly, a few links:

      AngularJs tutorial – this is the official AngularJs tutorial that I worked through in this blog.  Note that the source-code on GitHub currently covers step_00 through step_04.  Future blog entries will discuss step_05 and beyond.

      The completed source-code for this project can be found on GitHub: blorkfish/TypeScriptAngularTutorial

      I’ll follow these steps:

      • Creating a TypeScript Project
      • Using NuGet to install AngularJs, bootstrap, jasmine, and DefinitelyTyped definitions.
      • Writing TypeScript versions of the AngularJs controllers and unit tests.
      • Using node to setup karma
      • Debugging with Visual Studio 2013.
      • Debugging Unit tests with IE.
      • Debugging e2e tests with IE.
      • Notes on running tests on a CI build server.

      Creating a TypeScript Project.

      Let’s go ahead and create a TypeScript project in Visual Studio 2013.  For the purposes of this blog, I’ve called my project TypeScriptAngularTutorial.

      image

      Upgrade to .NET Framework 4.5

      Note that to use the new Microsoft ASP.NET Web API 2.1, we’ll need to upgrade the project to use .NET 4.5.  Simply right-click on the project, select Properties, and modify the Target framework to .NET Framework 4.5:

      image

      Using NuGet to install dependencies.

      Click on Tools | Library Package Manager | Manage NuGet packages for solution.  In the search box, simply type AngularJs:

      I used the AngularJS package created by Fitzchak Yitzchaki, as it includes all of the angular .js files in one hit:

      image

      Next, we need the angularjs.TypeScript.DefinitelyTyped package – as seen at the bottom of the list in the above screenshot.

      Your project will now include a Scripts folder, with all of the relevant angular .js files, as well as a /Scripts/typings folder with all of the relevant .d.ts definition files for TypeScript:

      image

      Next, install NuGet packages bootstrap, jasmine, and jasmine.TypeScript.DefinitelyTyped.

      image

      image

      Resolving issues with conflicting typings

      At this stage, compiling the project will result in various errors relating to Duplicate Identifiers:

      Duplicate identifier ‘describe’, ‘xdescribe’, ‘it’, ‘iit’, etc. etc.

      This is due to multiple .d.ts files including definitions for the jasmine libraries.  To resolve this, edit /Scripts/typings/angular-scenario.d.ts, comment out the definitions for describe etc. at the bottom of the file, and remove the /Scripts/typings/jasmine/jasmine-1.3.d.ts file from the project.

      image

      Compiling the project now will produce 3 errors, all related to JasmineController.cs.

      image

      This can be resolved by simply removing the JasmineController.cs file from the project.  It seems that the installation of Jasmine.js automatically adds a Jasmine MVC controller.  Since this is not an MVC project, the relevant MVC references have not been included.

      Writing TypeScript versions of AngularJs Controllers and Unit Tests.

      At this stage, we are ready to start writing AngularJs code.  Step_00 of the AngularJs Tutorial explains how to create app/index.html, and include the relevant angular.js files in order to see your first AngularJs application up and running.  The only difference between the Tutorial code and the Visual Studio version is the location of the AngularJs files.

      Directory Structure

      Using NuGet within Visual Studio will automatically create a /Scripts folder, and place JavaScript files in this folder.  NuGet will also place .css files in the /Content/ directory, and the TypeScript DefinitelyTyped definition files (.d.ts) will be downloaded in /Scripts/typings.

      I have therefore kept the directory structure used in the tutorial the same as documented on the AngularJs site, and changed the references from /app/lib/ … to use the DefinitelyTyped structure of /Scripts/ …

      Css files are split into two directories – those that are included with NuGet packages are found in /Content, while those that you write yourself as part of the app go into the /app/css directory.

      The script references in the /app/index.html file for Step_00 are therefore as follows:

      <head>
          <title>Tutorial app/index.html</title>
          <link rel="stylesheet" href="css/app.css"/>
          <link rel="stylesheet" href="../Content/bootstrap.css" />
          <script src="../Scripts/angular.js"></script>
      </head>
      

      Here is the resultant directory structure:

      image

      Debugging in Visual Studio

      Note that setting /app/index.html as your project startup file, and debugging in Internet Explorer ( by hitting F5 ) may not currently have the desired effect.

      If your page does not work correctly, i.e. shows Nothing here {{'yet' + '!'}}, the problem is that IE is interpreting the page as an IE 7 standards page.

      Include the <meta http-equiv> tag line at the top of your /app/index.html page to force IE to use the latest version of the standards:

      <!DOCTYPE html>
      <meta http-equiv="X-UA-Compatible" content="IE=Edge">
      <html lang="en" ng-app="phonecatApp">
      <head>

      Writing the AngularJs Controller in TypeScript

      In step_02 of the AngularJs Tutorial, you will begin writing an Angular Controller.  To do this in TypeScript, simply create a class with a constructor that takes the $scope parameter.  Remember to include the angular.d.ts file for type definitions.

      The full version of step_02 controller.ts is as follows:

      /// <reference path="../../Scripts/typings/angularjs/angular.d.ts"/>
      
      var phonecatApp = angular.module('phonecatApp', []);
      
      class PhoneListCtrl  {
          constructor($scope) {
              $scope.phones = [
                  { 'name': 'Nexus S', 'snippet': 'Fast just got faster' },
                  { 'name': 'Motorola', 'snippet': 'Next generation tablet' },
                  { 'name': 'Motorola Xoom', 'snippet': 'Next, next generation tablet' }
              ];
          }
      }

      If you would like to see the generated JavaScript version of this file, simply click on your project, and then “Show all files”.

      At this stage, you should be able to put a breakpoint on the constructor above, hit F5, and debug directly into your TypeScript code.
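      As a side note, the $scope parameter above is untyped.  We can regain type safety by defining an interface for the scope – a sketch only, in which my local IScope stand-in takes the place of the real ng.IScope from angular.d.ts, and the IPhoneListScope name is my own:

      ```typescript
      // Minimal stand-in for ng.IScope (the real interface comes from angular.d.ts).
      interface IScope { }

      // Hypothetical interface giving the controller's scope strong typing.
      interface IPhoneListScope extends IScope {
          phones: { name: string; snippet: string }[];
      }

      class PhoneListCtrl {
          constructor($scope: IPhoneListScope) {
              // Assigning anything other than a phone array is now a compile error.
              $scope.phones = [
                  { name: 'Nexus S', snippet: 'Fast just got faster' },
                  { name: 'Motorola Xoom', snippet: 'Next, next generation tablet' }
              ];
          }
      }

      // Usage sketch: in the browser Angular supplies $scope; here we fake it.
      var scope = <IPhoneListScope>{};
      new PhoneListCtrl(scope);
      console.log(scope.phones.length); // 2
      ```

      The compiler will now catch typos such as $scope.phnoes, which is exactly the kind of safety net TypeScript buys us over plain JavaScript controllers.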

      Using node to setup Karma.

      Karma setup is very straight-forward – simply install node and then install karma as directed in the tutorial.  Node.js ships with a Windows installer, and once it is installed, simply type:

      npm install -g karma

      You will also need the karma-ng-scenario module.  If you would like to run tests in both Chrome ( the default ) and IE, simply install the karma-ie-launcher.  I use TeamCity for a CI build server, so I also need the karma-teamcity-reporter:

      npm install -g karma-teamcity-reporter
      npm install -g karma-ie-launcher
      npm install -g karma-ng-scenario
      npm install -g karma-junit-reporter

      Note that on build servers, you will need to run the above commands while logged in as the account that executes the build.  Simply running the above as an Administrator will not install karma for all logged in users.

      Running unit tests with Karma

      With a little tweaking to the /test/config/karma.conf.js file to take into account the modified directory structure, running unit tests using karma is pretty simple – just navigate to the /test/scripts directory, and run test.bat.

      Note that on build servers, simply append the --single-run parameter to the karma command line, which will run the tests once and then quit the batch file.
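      For reference, the corresponding entries in the karma configuration file might look something like this – a sketch only, adjust to your own project, and note that option names can vary between karma versions:

      ```javascript
      // karma.conf.js fragment (sketch) - wiring up the modules installed above.
      module.exports = function (config) {
          config.set({
              frameworks: ['ng-scenario'],          // from karma-ng-scenario
              browsers: ['Chrome', 'IE'],           // 'IE' requires karma-ie-launcher
              reporters: ['progress', 'teamcity'],  // from karma-teamcity-reporter
              singleRun: true                       // same effect as --single-run on CI
          });
      };
      ```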

      Debugging TypeScript Unit Tests in Visual Studio.

      Unfortunately, debugging TypeScript unit tests requires a running instance of Internet Explorer.  Similar to the /app/index.html file, we will need an .html file that we can set as the project startup file, and then simply hit F5 to start debugging unit tests.

      In the source provided with this article, I have created two SpecRunner.html files – one for standard Jasmine unit tests, and the other for e2e AngularJs tests.  Instead of using karma as a test runner in this instance, we will simply use Jasmine.

      /test/SpecRunner.html is as follows:

      <!DOCTYPE html>
      <meta http-equiv="X-UA-Compatible" content="IE=Edge">
      <html>
      <head>
          <title>Partner Settings Test Suite</title>
          <!-- include your script files (notice that the jasmine source files have been added to the project) -->
          <script type="text/javascript" src="../Scripts/jasmine/jasmine.js"></script>
          <script type="text/javascript" src="../Scripts/jasmine/jasmine-html.js"></script>
          <script type="text/javascript" src="../Scripts/angular.js"></script>
          <script type="text/javascript" src="../Scripts/angular-mocks.js"></script>
          <script type="text/javascript" src="../app/ts/controllers.js"></script>
          <script type="text/javascript" src="unit/controllersSpec.js"></script>
          <link rel="stylesheet" href="../Content/jasmine/jasmine.css" />
      </head>
      <body>
          <!-- use Jasmine to run and display test results -->
          <script type="text/javascript">
              var jasmineEnv = jasmine.getEnv();
              jasmineEnv.addReporter(new jasmine.HtmlReporter());
              jasmineEnv.execute();
          </script>
      </body>
      </html>

      Note that we have included jasmine.js and jasmine-html.js ( as well as the jasmine.css ) into the page, along with the required angular and controller javascript files.

      Running the project with /test/SpecRunner.html as your startup page will allow you to set breakpoints in any of the above included javascript or typescript files.

      image

      Debugging TypeScript e2e tests with Visual Studio

      Setting /test/e2e/SpecRunner.html as your project startup file will also enable debugging of e2e tests, when hitting F5 in Visual Studio.

      image

      If you would like to run the /test/script/test_e2e.bat file, just keep in mind that this requires a running web-server.  In Visual Studio, you need to hit F5 to start up IISExpress before running the batch file.  Otherwise, you will get an error as follows:

      image

      Note that in the /test/config/karma.e2e.js file there is a reference to proxies.  This sets the web-server site and port for the e2e tests to a running web-server instance.  In the properties of the project file, I have set the IIS Express port to be 53722, and this is the value of the proxies setting:

      image

      module.exports = function (config) {
          config.set({
              basePath: '../../',
      
              files: [
                'test/e2e/**/*.js'
              ],
              
              urlRoot: '/_karma_/',
      
              autoWatch: false,
              
              singleRun: true,
              
              proxies : {
                  '/' : 'http://localhost:53722'
              }
          });
      };

      Running tests on a CI build server.

      To conclude, just a few notes on running tests on a CI build server.

      In order to run unit tests on a CI build server ( such as TeamCity ), bear in mind the following:

      • Use the --single-run parameter on the karma command line to run unit tests once only.
      • Depending on which build server you have, you may need either karma-junit-reporter, or karma-teamcity-reporter to report on unit test results correctly.
      • To launch tests in multiple browsers, make sure that you have installed the relevant launcher on the build server ( karma-chrome-launcher, karma-firefox-launcher, karma-ie-launcher).
      • Make sure that you log in as the account that runs the build to install karma and relevant reporters and launchers.
      • For e2e tests, a full version of the running web-site is required.
      • We use the Visual Studio packaging mechanism and msdeploy for automatic deployment on build servers.
      • Point your proxies setting in karma.e2e.conf.js to the web-site as above.
      • For web-servers that are behind a firewall, have a look at using cntlm local proxies.  This allows you to use your normal login on a server that does not have internet access.  All authentication is via a hash-key, so no usernames or passwords appear in plain text.

      Have fun,

      – blorkfish.

      TypeScript strongly typed Backbone Models

      TypeScript and Backbone are a great fit in terms of writing simple, Object-Oriented code using JavaScript.  The simplicity of Backbone, coupled with TypeScript’s generated closure syntax, allows one to simply use the TypeScript extends keyword to derive from any of Backbone’s base classes – and start implementing functionality immediately.  This is a very natural way to write Object-Oriented JavaScript:

      class ListItem extends Backbone.Model {
      
      }

      The problem with Backbone Models

      Unfortunately, Backbone uses object attributes to store Model properties, and these need to be set in order for the Backbone model to work correctly.  The following code shows how to set Backbone Model properties:

      class ListItem extends Backbone.Model {
          constructor() {
              super();
              this.set('Id', '2');
          }
      }
      

      The set and get functions of Backbone, however, do not have any inherent type-safety.  These model properties are also not exposed as first-class object properties, and must always be accessed via the setter and getter functions.  A more natural way of expressing Backbone Models would be as follows:

      class ListItem extends Backbone.Model {
          Id: number;
          Name: string;
          constructor() {
              super();
              this.Id = 2;
              this.Name = "ModelName";
          }
      }

      Using ES5 getter and setter syntax

      The above effect can be achieved by using ES5 getter and setter syntax.  By defining a get and set function for each property, and in turn calling the Backbone get and set functions, we can keep our Backbone Model in sync with our TypeScript properties.

      Note that you will need to switch your TypeScript project properties to compile to ES5 in order for the following code to work:

      class ListItem extends Backbone.Model {
          get Id(): number {
              return this.get('Id');
          }
          set Id(value: number) {
              this.set('Id', value);
          }
          set Name(value: string) {
              this.set('Name', value);
          }
          get Name(): string {
              return this.get('Name');
          }
      
          constructor() {
              super();
              this.Id = 1;
              this.Name = "ModelName";
          }
      }
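
      The mechanism is easier to see in isolation.  The following sketch ( AttributeModel is a hypothetical stand-in, not the Backbone API ) stores values in an internal attribute map, just as Backbone does, while ES5 accessors expose them as first-class properties:

      ```typescript
      // Hypothetical stand-in for a Backbone Model: values live in an internal
      // attribute map, and ES5 accessors keep property syntax in sync with it.
      class AttributeModel {
          private attributes: { [key: string]: any } = {};

          get(name: string): any { return this.attributes[name]; }
          set(name: string, value: any): void { this.attributes[name] = value; }

          get Id(): number { return this.get('Id'); }
          set Id(value: number) { this.set('Id', value); }
      }

      var model = new AttributeModel();
      model.Id = 7;          // routed through the Id accessor into the attribute map
      model.set('Id', 9);    // the property view reflects the change immediately
      ```

      Both model.Id and model.get('Id') now read from the same backing store, which is exactly what the Backbone version above achieves.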
      

      Model Type-safety

      We can further improve our model’s type-safety by using an interface – both for the implements keyword ( forcing our class to implement getters and setters for each interface property ), and for the object constructor.

      Consider the following code:

      interface IListItem {
          Id: number;
          Name: string;
      }
      
      class ListItem extends Backbone.Model implements IListItem {
          get Id(): number        { return this.get('Id'); }
          set Id(value: number)   { this.set('Id', value); }
          set Name(value: string) { this.set('Name', value); }
          get Name(): string      { return this.get('Name'); }
      
          constructor(input: IListItem) {
              super();
              this.Id = input.Id;
              this.Name = input.Name;
          }
      }
      

      Note that we have defined an interface ( IListItem ), and forced the ListItem object to implement the interface.  We have also added the interface to the constructor, further enhancing our type-safety.

      This ensures that changes to the interface will generate compile-time errors if the object does not have corresponding getter and setter functions, and also ensures that any object passed via the constructor will have all properties defined.

      Simplifying the constructor

      For a Backbone model with many properties, the constructor can further be simplified by looping through the properties of the input parameter as follows:

      interface IListItem {
          Id: number;
          Name: string;
      }
      
      class ListItem extends Backbone.Model implements IListItem {
          get Id(): number { return this.get('Id'); }
          set Id(value: number) { this.set('Id', value); }
          set Name(value: string) { this.set('Name', value); }
          get Name(): string { return this.get('Name'); }
      
          constructor(input: IListItem) {
              super();
              for (var key in input) {
                  if (input.hasOwnProperty(key)) {
                      this[key] = input[key];
                  }
              }
          }
      }
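
      The same loop-based copy constructor works for any plain class, not just Backbone Models.  A minimal stand-alone sketch ( IPoint and Point are illustrative names, not part of the sample app ):

      ```typescript
      interface IPoint {
          x: number;
          y: number;
      }

      class Point implements IPoint {
          x: number = 0;
          y: number = 0;

          constructor(input: IPoint) {
              // copy every own property of the input object onto this instance
              for (var key in input) {
                  if (input.hasOwnProperty(key)) {
                      (<any>this)[key] = (<any>input)[key];
                  }
              }
          }
      }

      var p = new Point({ x: 3, y: 4 });   // p.x === 3, p.y === 4
      ```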

      A Jasmine Unit test

      The above code will allow for both standard get and set Backbone syntax, as well as type-safe TypeScript syntax as shown in the following unit test:

      describe("SampleApp : tests : models : ListItem_tests.ts ", () => {
          it("can construct a ListItem model", () => {
              var listItem = new ListItem(
                  {
                      Id: 1,
                      Name: "TestName",
                  });
              expect(listItem.get("Id")).toEqual(1);
              expect(listItem.get("Name")).toEqual("TestName");
      
              expect(listItem.Id).toEqual(1);
      
              listItem.Id = 5;
              expect(listItem.get("Id")).toEqual(5);
      
              listItem.set("Id", 20);
              expect(listItem.Id).toEqual(20);
          });
      
      });

      Note how a ListItem is constructed with a standard JavaScript object definition.  Also, both the listItem.Id and listItem.set('Id', 20) syntaxes can be used to interact with a type-safe TypeScript Backbone model.

      Have fun,

      blorkfish.

      Note : This blog post was the result of a question posted on stackoverflow.com

      Using ExtJs with TypeScript

      Since the release of TypeScript there has been an explosion of JavaScript libraries that have had TypeScript definition files written for them.  A significant number can be found in borisyankov’s github repository  ( DefinitelyTyped ).

      Unfortunately, there are currently no definition files for ExtJs in this repository.

      Kudos to Mike Aubury (zz9pa) who has been able to use jsduck to reverse-engineer an ExtJs definition file from the ExtJs documentation, and load the project into his github repository ( extjsTypescript ).  For the purpose of this blog, I have had to modify Mike’s ExtJs.d.ts file generation slightly, just to mark each property and function as optional.  Read on to find out why.

      This blog is the result of my initial findings from attempting to use zz9pa’s ExtJs definitions with TypeScript.

      ExtJs Class Structure

      Let’s have a look at a simple ExtJs Application :

      Ext.application(
          {
              name: 'SampleApp',
              appFolder: '/code/sample',
              controllers: ['SampleController'],
              launch: () =>  {
      
                  Ext.create('Ext.container.Viewport', {
                      layout: 'fit',
                      items: [{
                          xtype: 'panel',
                          title: 'Sample App',
                          html: 'This is a Sample Viewport'
                      }]
                  });
      
              }
      
          }
      );

      Note that the structure of ExtJs javascript is to instantiate objects with a configuration block as follows:

      Ext.application(
          { 
              // Ext.application config block
          } 
      );
      Ext.create('Ext.container.Viewport', 
          {
              // viewport config block
          }
      );
      

      Compile time type-casting

      The only way to get the powerful benefits of TypeScript (i.e. type safety and intellisense) is to manually cast these configuration blocks to the correct type at compile time, as follows:

      Ext.application(
          <Ext_app_Application>{ 
              // Ext.application config block
              // now has intellisense and type casting
          } 
      );
      

      With the above modification to the config block, we now have full intellisense, and type checking within our code.
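
      A minimal illustration of what the cast buys us ( IAppConfig here is a hypothetical interface, standing in for the generated Ext_app_Application ):

      ```typescript
      // Hypothetical config interface - all members optional, so that partial
      // config blocks are legal.
      interface IAppConfig {
          name?: string;
          appFolder?: string;
          launch?: () => void;
      }

      // With the cast, the compiler checks every property in the block against
      // the interface, and the editor can offer intellisense for its members.
      var config = <IAppConfig>{
          name: 'SampleApp',
          appFolder: '/code/sample'
      };
      ```

      Within the block, intellisense now suggests name, appFolder and launch, and assigning the wrong type to a known property ( e.g. name: 123 ) becomes a compile-time error.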

      A simple ExtJs Application with type-casting.

      Let’s have a look at our sample application now utilizing the type-casting method as described above:

      Ext.application(
          <Ext_app_Application> { // config block cast
              name: 'SampleApp',
              appFolder: '/app/sampleapp',
              controllers: ['SampleController'],
              launch: () =>  {
      
                  Ext.create('Ext.container.Viewport', 
                      <Ext_container_Viewport>{ // config block cast 
                      layout: 'fit',
                      items: [<Ext_panel_Panel>{
                          xtype: 'panel',
                          title: 'Sample App',
                          html: 'This is a Sample Viewport'
                      }]
                  });
              }
          }
      );

      ExtJs_Intellisense

      Modifications to ExtJs.d.ts

      In order to use the module definition for ExtJs ( extjsTypescript ) with this style of coding, we will need to modify the generated definition file to mark each property and function as optional :

      Instead of this ( as an example)

      interface Ext_AbstractPlugin extends Ext_Base {
         pluginId : String;
      }

      We need this:

      interface Ext_AbstractPlugin extends Ext_Base {
         pluginId ? : String;
      }

      Note the ? optional flags for each of the properties and methods.

      I have made some quick modifications to zz9pa’s code in order to make each property and function optional.  The Ext.d.ts file is included in the source download accompanying this blog.

      The Ext namespace

      The final modification that we need for the ExtJs definition file is for the Ext namespace itself.  This is where the majority of the work will be required to add further properties and function definitions for the Ext namespace.

      Putting together this blog, and getting some samples up and running, I have defined only a tiny sub-set of the Ext namespace – mostly what I have needed to build a very simple application, and to start writing unit tests in Jasmine and Siesta.  So far, I have the following:

      var Ext: IExt;
      interface IExt {
          application(config: Ext_app_Application);
          Window: Ext_WindowManager;
          create(name: string, viewport: Ext_container_Viewport);
          getVersion();
          get (name: string): Ext_dom_AbstractElement;
          removeNode(name: Ext_dom_AbstractElement);
          DomHelper: Ext_DomHelper;
          getBody(): Ext_dom_AbstractElement;
      
          define(name: string, controller: Ext_app_Controller);
          define(name: string, controller: Ext_app_Application);
      
          ComponentManager: Ext_ComponentManager;
          require(name: string);
          onReady(call: Function);
      }

      Note that there are two define() functions, each with a different controller cast – as TypeScript will allow for function overloading.
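
      Function overloading in TypeScript works by listing the overload signatures first, followed by a single implementation that handles all of them.  A minimal sketch ( define, ControllerConfig and ApplicationConfig here are illustrative, not the real Ext API ):

      ```typescript
      interface ControllerConfig { extend: string; }
      interface ApplicationConfig { name: string; }

      // overload signatures - these are what callers see
      function define(name: string, config: ControllerConfig): string;
      function define(name: string, config: ApplicationConfig): string;
      // a single implementation signature handles both cases
      function define(name: string, config: any): string {
          return ('extend' in config) ? name + ' : controller' : name + ' : application';
      }

      var a = define('SampleApp.controller.Main', { extend: 'Ext.app.Controller' });
      var b = define('SampleApp.Application', { name: 'SampleApp' });
      ```

      The compiler selects the matching overload from the shape of the config object passed in, just as it does for the two Ext.define signatures above.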

      ExtJs and TypeScript GOTCHA’s

      Scope of this in config blocks and closures.

      TypeScript uses the closure and module patterns extensively for its generated javascript.   One of the major advantages of these patterns is correctly controlling scope, particularly for the ubiquitous this keyword.

      In the following TypeScript code, init() will NEVER work correctly when called by ExtJs !

      Note that init is a function returning this.control ( { … } ), and is using standard TypeScript syntax for specifying init as a function – the  () => { } syntax.

      Ext.define('SampleApp.controller.FaultyController', <Ext_app_Controller>{
          extend: 'Ext.app.Controller',
          init: () => { 
              this.control({});
          },
      });
      

      The compiled code looks like this:

      var _this = this;
      Ext.define('SampleApp.controller.FaultyController', {
          extend: 'Ext.app.Controller',
          init: function () {
              _this.control({
              });
          }
      });

      Note the var _this = this; line at the top of the code, and how the init: function() will call _this.control – here we have an example of how TypeScript is using closures to ensure that we are scoping this correctly.

      The solution : use anonymous functions

      To resolve this issue, we will need to use standard javascript syntax for declaring anonymous functions inside our configuration block as follows:

      Ext.define('SampleApp.controller.WorkingController', <Ext_app_Controller>{
          extend: 'Ext.app.Controller',
          init: function () {
              this.control({});
          }
      });

      Note the very subtle difference between init: () => {} syntax, and init: function() { } syntax.

      The compiled version of the above code now works correctly with ExtJs:

      Ext.define('SampleApp.controller.WorkingController', {
          extend: 'Ext.app.Controller',
          init: function () {
              this.control({
              });
          }
      });
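
      The scoping difference can be reproduced without ExtJs at all.  In the sketch below ( FakeController and callInit are illustrative, not the Ext API ), the framework invokes init with the controller instance as this, which only works when init is a plain function:

      ```typescript
      class FakeController {
          controlled = false;
          control(config: any): void { this.controlled = true; }
      }

      // simulate ExtJs: invoke init with the controller instance as `this`
      function callInit(init: Function): FakeController {
          var controller = new FakeController();
          init.call(controller);
          return controller;
      }

      class Harness {
          run(): { plainWorked: boolean; arrowThrew: boolean } {
              // plain function: `this` is supplied by the caller - works
              var good = callInit(function (this: any) { this.control({}); });

              // arrow function: `this` is captured from the Harness instance,
              // which has no control() method - throws at run-time
              var arrowThrew = false;
              try {
                  callInit(() => { (<any>this).control({}); });
              } catch (e) {
                  arrowThrew = true;
              }
              return { plainWorked: good.controlled, arrowThrew: arrowThrew };
          }
      }
      ```

      This is exactly the difference between the FaultyController and WorkingController above: the arrow function compiles to a closure over an outer _this, ignoring the receiver that ExtJs supplies.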
      

      Unit Testing

      The sample source code that is attached to this blog entry contains two test runners, one built for Jasmine, and another built for Siesta.

      As my unit-testing tool of choice is Jasmine, I have put together more unit tests for Jasmine than for Siesta, but am also compiling some Siesta tests through TypeScript, just to show how to use Siesta.

      One and only one Application

      Each ExtJs solution can have one and only one Application defined, and jasmine tests need to be launched during global Application initialization – specifically during the launch function as follows:

      Ext.onReady(() => {
          Ext.create('Ext.app.Application', <Ext_app_Application> {
              name: 'TestAppBootStrapper',
              appFolder: '../app/sampleapp',
              launch: () => {
      
                  jasmine.getEnv().addReporter(new jasmine.HtmlReporter());
                  jasmine.getEnv().execute();
                  return true;
              }
          });
      });

      Unfortunately, this presents some problems in test coverage, as initialization routines in the main Application, or in its Controllers, cannot be tested – or at least I have not found a way to do so.

      Consider the following test:

          it('has called init on SampleController', () => {
              expect(SampleApp.getController('SampleController')).toBeDefined();
      
              // cannot spy on init function, as it is called before the tests start
              var spyOnInit = spyOn(SampleApp, 'init');
              expect(spyOnInit).toHaveBeenCalled(); // this will always fail
      
          });
      

      This test will ALWAYS fail – as the init method of the SampleController ( the default controller for our Application ) is called BEFORE we have a chance to set a spy on the method.
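
      The timing problem is easy to reproduce with a minimal hand-rolled spy ( spyOnMethod below is illustrative; it mimics what jasmine’s spyOn does by replacing the method with a recording wrapper ):

      ```typescript
      class SampleTimingController {
          initCount = 0;
          init(): void { this.initCount++; }
      }

      // replace obj[name] with a wrapper that records calls made AFTER installation
      function spyOnMethod(obj: any, name: string): { called: boolean } {
          var record = { called: false };
          var original = obj[name];
          obj[name] = function () {
              record.called = true;
              return original.apply(obj, arguments);
          };
          return record;
      }

      var timingController = new SampleTimingController();
      timingController.init();                              // the framework initializes first...
      var spy = spyOnMethod(timingController, 'init');      // ...and only then is the spy installed
      // spy.called is false here: the earlier call predates the spy
      ```

      A spy can only record calls made after it replaces the method, so any init call that happens during Application bootstrap is invisible to it.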

      In Conclusion : ExtJs objects vs TypeScript objects

      Unfortunately, TypeScript and ExtJs do not seem to work too well together.  This incompatibility is mainly due to the differences in object creation between the two approaches.

      Where ExtJs uses config blocks and anonymous methods for object creation, TypeScript uses the closure pattern to bring an easier way to build object-oriented javascript.  Unfortunately these two approaches seem to be at odds with each other.

      Consider ExtJs’s method of object creation:

      Ext.create(
          'Ext.container.Viewport', // object name
          <Ext_container_Viewport>{ // config block
              launch: function () { // anonymous method
              }
      });

      Each object is created with an object name ( global namespace ), followed by a configuration block and anonymous methods.  Having to statically cast each config block to the required type is a work-around to get Intellisense and type-checking into ExtJs code.

      If the ExtJs libraries were written in a more TypeScript friendly manner, then we would be able to code like this:

      // possible implementation of Ext.container.Viewport
      class ExtContainerViewport {
          constructor(objectName: string) {
          }
          launch() {
          }
      }
      
      // extending an ExtContainerViewport is now more object-oriented.
      class MyViewPort extends ExtContainerViewport {
          constructor() {
              super('MyViewPort');
          }
          launch() {
              super.launch();
          }
      }
      

      Have fun,

      Blorkfish

      Source Code Download

      The full source code for this blog can be found here.  Note that in order to run the application, you will need the latest version of ExtJs which can be downloaded from here, as well as the latest version of siesta, which can be downloaded from here.

      TypeScript: Organizing your code with AMD modules and require.js

      TypeScript has two methods of generating JavaScript output files: CommonJS and AMD.  CommonJS is the default, and AMD modules can be generated by adding the --module AMD option to the compiler flags.

      In this post, I’ll show how to create and use AMD modules, as well as configuring require.js by using require.config.

      Update

      This article has been updated to use TypeScript 0.9.  You can download / browse the source at github/blorkfish/typescript-amd-require-0.9

      The older 0.8.1 source for this solution can be found here.

      The older 0.8.0 source for this solution can be found here

      Mastering TypeScript Book : available April 2015

      Over the past couple of months, I have been working very closely with the publishing team at PAKT Publishing on a new book called “Mastering TypeScript”.  It is scheduled for publication in April 2015.  You can read all about it here: https://www.packtpub.com/web-development/mastering-typescript.


      Creating a default project using CommonJS

      Let’s start with a standard new TypeScript project – which by default creates an app.ts file, and a default.htm – and add the following:

      • \app directory (for application files)
      • \app\classes (for our AMD classes)
      • \lib directory (for external libraries)
      • \modules (for our module definitions)
      • \app\AppMain.ts  ( note that you should remove any code that the compiler generates in this file)
      • \app\AppConfig.ts ( remove any code )
      • \app\classes\Greeter.ts ( remove any code )
      • download require.js and include it in the \lib directory. ( require.js release 2.1.8 can be found here )
      • download require.d.ts from DefinitelyTyped, and save it in the modules directory.

      require_amd_1

      \modules\Require.d.ts

      As at the time of update, this is at version 2.1.1.

      \app\classes\Greeter.ts as an AMD module

      Cut the code defining the Greeter class from \app.ts and paste it into the \app\classes\Greeter.ts file:

      Effectively, we are now starting to organise our project, with one .ts file for each class.

      app\classes\Greeter.ts:
      class Greeter {
          element: HTMLElement;
          span: HTMLElement;
          timerToken: number;
      
          constructor (element: HTMLElement) { 
              this.element = element;
              this.element.innerText += "The time is: ";
              this.span = document.createElement('span');
              this.element.appendChild(this.span);
              this.span.innerText = new Date().toUTCString();
          }
      
          start() {
              this.timerToken = setInterval(() => this.span.innerText = new Date().toUTCString(), 500);
          }
      
          stop() {
              clearTimeout(this.timerToken);
          }
      
      }

      Compiling the project now should show the error: Could not find symbol ‘Greeter’.

      Let’s fix this first by using a CommonJS reference – add a reference path to app.ts:

      app.ts
      /// <reference path="app/classes/Greeter.ts" />
      
      window.onload = () => {
          var el = document.getElementById('content');
          var greeter = new Greeter(el);
          greeter.start();
      };

      The project should now compile.

      If you run the project now (using Internet Explorer), the Greeter.js file will not be referenced, and you will see the error:

      0x800a1391 – JavaScript runtime error: ‘Greeter’ is undefined

      The simple solution is to include this new Greeter.js file in default.htm:

      default.htm
      <head>
          <meta charset="utf-8" />
          <title>TypeScript HTML App</title>
          <link rel="stylesheet" href="app.css" type="text/css" />
          <script type="text/javascript" src="app/classes/Greeter.js"></script>
          <script src="app.js"></script>
      </head>

      Running the project now will succeed:

      typescript_amd_1

      Converting Greeter.ts to an AMD module

      TypeScript 0.9 and upwards will by default compile all source files as AMD compliant.  This is slightly different to the 0.8 versions, where by default projects were compiled to CommonJS.  For reference purposes, the following section shows how to use AMD in 0.8 versions.  If using TypeScript 0.9, please continue to the next section, Export Greeter.

      Specifying AMD compilation for 0.8 and 0.8.1 versions of TypeScript:

      To compile project files to AMD modules, unload your project file, edit it, and add the --module AMD option to the command line options:

      0.8.1

      Here is the 0.8.1 version of the project file:

      Note that you will need to remove the --sourcemap option for the Debug configuration, as sourcemap and AMD do not work well together.

        <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
          <!-- remove the sourcemap option below -->
          <TypeScriptSourceMap></TypeScriptSourceMap>
        </PropertyGroup>
        <Target Name="BeforeBuild">
          <Message Text="Compiling TypeScript files" />
          <Message Text="Executing tsc$(TypeScriptSourceMap) @(TypeScriptCompile ->'&quot;%(fullpath)&quot;', ' ')" />
          <Exec Command="tsc$(TypeScriptSourceMap) --module AMD @(TypeScriptCompile ->'&quot;%(fullpath)&quot;', ' ')" />
        </Target>
      

      Older version 0.8.0 compiler version:

        <Target Name="BeforeBuild">
          <Exec Command="&quot;$(PROGRAMFILES)\Microsoft SDKs\TypeScript.8.0.0\tsc&quot; --module AMD @(TypeScriptCompile ->'&quot;%(fullpath)&quot;', ' ')" />
        </Target>

      Export Greeter

      Before we change app\classes\Greeter.ts to an AMD module, have a look at the generated javascript source :

      var Greeter = (function () {
          function Greeter(element) {
              this.element = element;
              this.element.innerHTML += "The time is: ";
              this.span = document.createElement('span');
              this.element.appendChild(this.span);
              this.span.innerText = new Date().toUTCString();
          }
          Greeter.prototype.start = function () {
              var _this = this;
              this.timerToken = setInterval(function () {
                  return _this.span.innerHTML = new Date().toUTCString();
              }, 500);
          };
      
          Greeter.prototype.stop = function () {
              clearTimeout(this.timerToken);
          };
          return Greeter;
      })();
      //@ sourceMappingURL=Greeter.js.map

      Now modify the Greeter class definition, and add the export keyword:

      app/classes/Greeter.ts
      export class Greeter {

      AMD compliant javascript source.

      After compiling, note the changes to the javascript source – the entire code block has been wrapped in a define( […] ) block, and there is an extra exports.Greeter = Greeter; line at the bottom of the file:

      define(["require", "exports"], function(require, exports) {
          var Greeter = (function () {
              function Greeter(element) {
                  this.element = element;
                  this.element.innerHTML += "The time is: ";
                  this.span = document.createElement('span');
                  this.element.appendChild(this.span);
                  this.span.innerText = new Date().toUTCString();
              }
              Greeter.prototype.start = function () {
                  var _this = this;
                  this.timerToken = setInterval(function () {
                      return _this.span.innerHTML = new Date().toUTCString();
                  }, 500);
              };
      
              Greeter.prototype.stop = function () {
                  clearTimeout(this.timerToken);
              };
              return Greeter;
          })();
          exports.Greeter = Greeter;
      });
      //@ sourceMappingURL=Greeter.js.map
      

      Compiling at this stage will generate errors : Could not find symbol ‘Greeter’.

      We now need to modify the app.ts file to import the module.  Remove the ///reference path line, and add an import statement as below:

      Now use the name of the import ( gt ) to reference gt.Greeter :

      app.ts
      import gt = module("app/classes/Greeter");
      
      window.onload = () => {
          var el = document.getElementById('content');
          var greeter = new gt.Greeter(el);
          greeter.start();
      };

      Running the app at this stage now will produce the following error:

      Unhandled exception at line 1, column 1 in http://localhost:8524/app.js

      0x800a1391 – JavaScript runtime error: ‘define’ is undefined

      This error is because define is part of the require.js library, as seen at the end of the require.d.ts file :

      // Ambient declarations for 'require' and 'define'
      declare var require: Require;
      declare var requirejs: Require;
      declare var req: Require;
      declare var define: RequireDefine;

      Configuring require.js

      In order to use AMD modules, we need to tell our page to include require.js.  Looking at the require.js documentation, the way to do this is to include the following in your html page

      default.htm :

      <script data-main="app/AppConfig" type="text/javascript" src="lib/require.js"></script>

      Note that the require.js syntax is to use the data-main property to specify a JavaScript file to load as the initial starting point for the application – in this case : app/AppConfig.js.

      Remove the reference to Greeter.js, and app.js, so that your default.htm file looks like this:

      <!DOCTYPE html>
      
      <html lang="en">
      <head>
          <meta charset="utf-8" />
          <title>TypeScript HTML App</title>
          <link rel="stylesheet" href="app.css" type="text/css" />
          <script data-main="app/AppConfig" type="text/javascript" src="lib/require.js"></script>
      </head>
      <body>
          <h1>TypeScript HTML App</h1>
      
          <div id="content"></div>
      </body>
      </html>

      AppConfig.ts

      Create an app/AppConfig.ts TypeScript file, as follows:

      app/AppConfig.ts
      /// <reference path="../modules/require.d.ts" />
      
      import gt = module("classes/Greeter");
      
      require([], () => {
          // code from window.onload
          var el = document.getElementById('content');
          var greeter = new gt.Greeter(el);
          greeter.start();
      });

      Note that we have moved the application startup code (the window.onload function) into the body of the require function.

      The app should now run using AMD loading.

      Adding further modules

      Should your application require further AMD modules, simply include them in the /lib directory, and specify them in the first array parameter as follows:

      app/AppConfig.ts
      require(['../lib/jquery-1.7.2','../lib/underscore', '../lib/backbone', '../lib/console'], () => {
          // code from window.onload
          var el = document.getElementById('content');
          var greeter = new gt.Greeter(el);
          greeter.start();
      });

      Note that the paths for require are relative to the location of the AppConfig.ts file (which is in the app directory).

      Using require.config

      require.js has a number of configuration options that make it so powerful.  Among these is the ability to define dependencies between modules.

      Unfortunately, including a require.config in our app/AppConfig.ts file as shown below will result in a run-time error:

      0x800a01b6 – JavaScript runtime error: Object doesn’t support property or method ‘config’

      app/AppConfig.ts
      /// <reference path="../modules/require.d.ts" />
      
      // the config below will cause a run-time error
      require.config({
          baseUrl: '../'
      });
      
      import gt = module("classes/Greeter");
      
      require(['../lib/jquery-1.7.2','../lib/underscore', '../lib/backbone', '../lib/console'], () => {
          // code from window.onload
          var el = document.getElementById('content');
          var greeter = new gt.Greeter(el);
          greeter.start();
      });
      

This run-time error occurs because the TypeScript compiler (with the --module amd compile option) wraps the entire file in a define call.  Have a look at the generated code:

      app/AppConfig.js (generated)
      define(["require", "exports", "classes/Greeter"], function (require, exports, __gt__) {
          // this require.config below should NOT be inside the define function
          require.config({
              baseUrl: '../'
          });
          var gt = __gt__;
      
          require([
              '../lib/jquery-1.7.2', 
              '../lib/underscore', 
              '../lib/backbone', 
              '../lib/console'
          ], function () {
              var el = document.getElementById('content');
              var greeter = new gt.Greeter(el);
              greeter.start();
          });
      })

Using require.config with AMD modules: the solution

The solution here is to separate our require configuration file from our application main file, and remove any import module statements from the configuration file.  Remember how the generated JavaScript changed when we added the import statement to app/classes/Greeter.ts?  So make sure that the file containing require.config does not have any import statements:

      AppMain.ts

      Create an AppMain.ts file within the app folder as follows:

      app/AppMain.ts
      import gt = module("classes/Greeter");
      
      export class AppMain {
          public run() {
              var el = document.getElementById('content');
              var greeter = new gt.Greeter(el);
              greeter.start();
      
          }
      }

      Modify AppConfig.ts to use named require parameters

      app/AppConfig.ts
      /// <reference path="../modules/require.d.ts" />
      /// <reference path="AppMain.ts" />
      require.config({
          //baseUrl: '../' // commented for now
      });
      
      require(['AppMain', 
          '../lib/jquery-1.7.2','../lib/underscore', '../lib/backbone', '../lib/console' ], 
          (main) => {
          // code from window.onload
          var appMain = new main.AppMain();
          appMain.run();
      });
      

Note that we have told require.js to load a file named 'AppMain' (our compiled AppMain.ts file), and that when the require callback runs, AppMain's classes will be available through the named parameter main. If we were to name all of the parameters in the require array, our code would look something like this:

      app/AppConfig.ts
      require(['AppMain', 
          '../lib/jquery-1.7.2','../lib/underscore', '../lib/backbone', '../lib/console' ], 
          (main, $, _, console) => {
          var appMain = new main.AppMain();
          appMain.run();
      });

      Using named require parameters and shims

Fortunately, require.js allows us to configure named parameters via the paths config variable, and then define a shim entry for each named path that specifies the exported symbol and any dependencies, as follows:

      app/AppConfig.ts
      require.config({
          baseUrl: '../',
          paths: {
              'jquery': 'lib/jquery-1.7.2',
              'underscore': 'lib/underscore',
              'backbone': 'lib/backbone',
              'console': 'lib/console'
          }, 
          shim: {
              jquery: {
                  exports: '$'
              },
              underscore: {
                  exports: '_'
              },
              backbone: {
                  deps: ["underscore", "jquery"],
                  exports: "Backbone"
              },
              console: {
                  exports: "console"
              }
          }
      });
      
      require([
           'jquery'
          , 'underscore'
          , 'backbone'
          , 'console'
          ], 
          ($, _, Backbone, console) => {
          $(() => {
      
              // code goes here
      
          });
      });
      

      Require.config tips

Note that jQuery plugins must export to $ as well; this is accomplished by specifying $ as the export for the plugin, and jquery as a dependency.  Note too that TypeScript will not allow multiple parameters with the same name in the require callback, so to use jquery.flip.js, for example, use the following require.config:

      require.config({
          paths: {
              'jquery': 'lib/jquery-1.7.2',
              'jqueryflip': 'lib/jquery-flip'
          },
          shim: {
              jquery: {
                  exports: '$'
              },
              jqueryflip: {
                  deps: ['jquery'],
                  exports: '$'
              }
          }
      });
      
      require(['jquery', 'jqueryflip', 'AppMain'], ($, jqueryflip, main) => {
          var appMain = new main.AppMain();
          appMain.run();
      });

      Have fun,

      – Blorkfish.

      TypeScript: Implementing a Simple IOC Container for Service Location

      Introduction

Inversion of Control is an object-oriented design pattern that encourages decoupling of objects by introducing a layer of abstraction between an object and its dependencies.

The purpose of this post is to explain how to implement a very simple IOC container using TypeScript, focusing on a service locator.  This IOC container can be used both for service location of TypeScript-generated classes and for external JavaScript libraries.

      Update

I have just set up a GitHub repository for TypeScriptTinyIoC : https://github.com/blorkfish/typescript-tiny-ioc

      End Goal

      Our end goal is to be able to use a very simple IOC for service location as follows:

      // registration
TypeScriptTinyIOC.register(new TestImplementsIDrawable(), new IIDrawable());
      
      // service location
      var implementsIDrawable = TypeScriptTinyIOC.resolve(new IIDrawable());
      expect(implementsIDrawable.draw()).toEqual("drawn");

      Reflection (of sorts)

Unfortunately, JavaScript does not support reflection, which most IOC containers rely on.  It does, however, allow us to query an object for a specific method.  In their book, "Pro JavaScript Design Patterns", Ross Harmes and Dustin Diaz explain how:

      Consider the following code:

      class Greeter {
          start() {
          }
      }
      
      class FunctionChecker {
          static implementsFunction(objectToCheck: any, functionName: string): bool {
              return objectToCheck[functionName] != undefined;
          };
      }
      
      window.onload = () => {
          var greeter = new Greeter();
          var el = document.getElementById('content');
          el.innerHTML = 'Does greeter implement start() ' +
              FunctionChecker.implementsFunction(greeter, 'start');
      };
      

This very simple function-checking technique tests whether the greeter instance implements a start function.
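As a standalone sketch of the same duck-typing check (using typeof, which is slightly stricter than the != undefined test above, since it also rejects non-function properties):

```typescript
// Duck-type check: does the object expose a function with this name?
function implementsFunction(objectToCheck: any, functionName: string): boolean {
    return typeof objectToCheck[functionName] === 'function';
}

const greeter = { start: () => { /* startup code */ } };

console.log(implementsFunction(greeter, 'start')); // true
console.log(implementsFunction(greeter, 'stop'));  // false
```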

Taking this principle one step further, let's define a TypeScript interface with a class name and a simple array of method names:

      interface IInterfaceChecker {
          className: string;
          methodNames: string[];
      }
      

Then we can define a class that implements this IInterfaceChecker interface:

      export class IITodoService implements IInterfaceChecker {
          className: string = 'IITodoService';
          methodNames: string[] = ['loadMTodoArray', 'storeMTodoArray'];
      };
      

      Static Reflection

      I guess that you can think of this mechanism as “static reflection”, or “manual reflection”, because we still need to define the list of method names manually.  There are benefits to this approach, though.

Interface Checking Benefits

While this very simple mechanism may seem trivial, its beauty is in its simplicity.  It is easy to implement, and it promotes reusability, because classes will have documented sets of methods, and can easily be swapped out for different classes that implement the same functionality.

It also provides us with a mechanism for determining (at run-time) whether a class implements the desired functionality, which can be invaluable when your classes depend on external libraries: whenever a new version is available, the library can be checked against your list of required functionality.
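As a sketch of that upgrade check (the library object here is a hypothetical stand-in, and the method list is whatever surface your code actually calls):

```typescript
// Hypothetical stand-in for a third-party library we depend on:
const thirdPartyLib: any = {
    parse: (s: string) => JSON.parse(s),
    stringify: (o: any) => JSON.stringify(o)
};

// The methods our code requires from the library:
const requiredMethods = ['parse', 'stringify'];

// Run this against each new library version before upgrading:
const missing = requiredMethods.filter(
    m => typeof thirdPartyLib[m] !== 'function');

console.log(missing.length === 0
    ? 'library satisfies our required surface'
    : 'missing: ' + missing.join(', '));
```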

      TypeScript Interfaces

While TypeScript provides the mechanism for strict compile-time checking of interfaces, at run-time we are still dealing with plain old JavaScript, so the interface definitions are compiled away.  For this reason, we will define a real TypeScript interface, as well as an InterfaceChecker definition for use at run-time.  This can easily be accomplished through a simple naming standard:
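You can verify the erasure for yourself: in the minimal example below, the interface declaration leaves no value behind to inspect at run-time, which is exactly why we need separate checker classes.

```typescript
// The interface exists only at compile time...
interface ITodoService {
    loadMTodoArray(): any[];
    storeMTodoArray(inArray: any[]): void;
}

// ...so at run-time there is nothing named ITodoService to query:
console.log(typeof (globalThis as any).ITodoService); // "undefined"
```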

A simple naming standard: I-name and II-name

For standard TypeScript interfaces, prefix the interface name with the letter I (as per C# conventions), and for InterfaceChecker definitions, prefix the class name with a double I:

      This is the standard TypeScript interface for a TodoService:

      interface ITodoService {
          loadMTodoArray() : any [];
          storeMTodoArray(inArray: any[]) : void;
      };

      and the InterfaceChecker class:

      export class IITodoService implements IInterfaceChecker {
          className: string = 'IITodoService';
          methodNames: string[] = ['loadMTodoArray', 'storeMTodoArray'];
      };

Then a class that implements ITodoService:

export class TodoService implements ITodoService {
          loadMTodoArray(): any [] {
              // load and return an array of objects
              return [{ id: 5},{ id: 6},{ id: 7}];
          };
          storeMTodoArray(inArray: any[]): void {
              // persist here
          };
      }

      Interface Checking

      Our run-time check to ensure that the TodoService class implements ITodoService is then as follows:

      var service = new TodoService();
      InterfaceChecker.ensureImplements(service , new IITodoService()); 
      // above will throw if not implemented

      InterfaceChecker

      The full code for our interface checker is as follows:

      class InterfaceChecker {
          name: string;
          methods: string[];
      
          constructor (object: IInterfaceChecker) {
              this.name = object.className;
              this.methods = [];
              var i, len: number;
              for (i = 0, len = object.methodNames.length; i < len ; i++) {
                  this.methods.push(object.methodNames[i]);
              };
          }
      
          static ensureImplements(object: any, targetInterface: InterfaceChecker) {
              var i, len: number;
              for (i = 0, len = targetInterface.methods.length; i < len; i++) {
                  var method: string = targetInterface.methods[i];
                  if (!object[method] || typeof object[method] !== 'function') {
                throw new Error("Function InterfaceChecker.ensureImplements: " +
                    "object does not implement the " + targetInterface.name +
                    " interface. Method " + method + " was not found");
                  }
              }
          };
          static implementsInterface(object: any, targetInterface: InterfaceChecker) {
              var i, len: number;
              for (i = 0, len = targetInterface.methods.length; i < len; i++) {
                  var method: string = targetInterface.methods[i];
                  if (!object[method] || typeof object[method] !== 'function') {
                      return false;
                  }
              }
              return true;
          };
      }

Once we have the mechanics of our InterfaceChecker in place, it is very simple to implement an IOC container for service location:

      TypeScriptTinyIOC

      class TypeScriptTinyIOC {
      
    static registeredClasses: { [className: string]: any } = {};
      
          static register(targetObject: any, interfaceType: IInterfaceChecker) {
              var interfaceToImplement = new InterfaceChecker(interfaceType);
      
              InterfaceChecker.ensureImplements(targetObject, interfaceToImplement); 
              // will throw if not implemented
              if (InterfaceChecker.implementsInterface(targetObject, 
                  interfaceToImplement)) 
              {
                  this.registeredClasses[interfaceType.className] = targetObject;
              }
          }
      
          static resolve(interfaceType: IInterfaceChecker): any {
              var resolvedInterface = this.registeredClasses[interfaceType.className];
              if (resolvedInterface == undefined)
                  throw new Error("Cannot find registered class that implements " 
                      + " interface: " + interfaceType.className);
              return resolvedInterface;
          }
      
      };

TypeScriptTinyIOC usage

This very simple IOC container can then be used as follows:

      interface IDrawable {
          centerOnPoint();
          zoom();
          draw(): string;
      }
      
      class IIDrawable implements IInterfaceChecker {
          className: string = 'IIDrawable';
          methodNames: string[] = ['centerOnPoint', 'zoom', 'draw'];
      }
      
      class TestImplementsIDrawable implements IDrawable {
          centerOnPoint() {
          };
          zoom() {
          };
          draw() : string {
              return 'drawn';
          };
      }
      
      // registration
TypeScriptTinyIOC.register(new TestImplementsIDrawable(), new IIDrawable());
      
      // service location
      var implementsIDrawable = TypeScriptTinyIOC.resolve(new IIDrawable());
      expect(implementsIDrawable.draw()).toEqual("drawn");
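One behaviour worth noting: resolving an interface that was never registered throws, turning a silent wiring mistake into an immediate, descriptive failure. A stripped-down sketch of that path (mirroring the resolve logic above, not the class itself):

```typescript
interface IInterfaceChecker {
    className: string;
    methodNames: string[];
}

// Minimal registry mirroring TypeScriptTinyIOC.resolve:
const registeredClasses: { [className: string]: any } = {};

function resolve(interfaceType: IInterfaceChecker): any {
    const found = registeredClasses[interfaceType.className];
    if (found === undefined)
        throw new Error('Cannot find registered class that implements interface: '
            + interfaceType.className);
    return found;
}

let message = '';
try {
    // nothing was registered, so this throws
    resolve({ className: 'IIDrawable', methodNames: ['draw'] });
} catch (e) {
    message = (e as Error).message;
}
console.log(message); // names the missing interface: IIDrawable
```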
      
      

      Have fun,

      – Blorkfish.

      In my next blog post, I will be tackling TypeScript AMD modules – understanding how to create and use them, how they help with code organisation, and how to mix standard TypeScript classes with AMD modules.