Proxy Fiddler Through Burp

by Bill Sempf 4. April 2018 15:03

I am testing an application that only works in Internet Explorer in compatibility mode.  Before you laugh: it is EXACTLY these legacy applications that get us into trouble, they should be tested regularly, and they can be secured using compensating controls.  However, I am on the client's computer, which has enterprise controls on the proxy, which means I can't easily configure IE to use Burp because IE uses the system proxy settings.

Fiddler, however, hooks into WinINET, so it sees the traffic from IE even with the proxy set to the corporate settings.  Fiddler is only an average-at-best security testing tool, though, so I would like to use Burp too. The solution is to chain the proxies, and all of the instructions I found online are out of date. So I thought I would add to the corpus, because it is quite simple these days.

First, it is important to know that Burp Suite listens on localhost, port 8080, by default.  This is what you set your browser to in order to have requests and responses filtered through Burp. We can leave these settings at their defaults.

Fiddler's proxy is localhost, port 8888, but that doesn't matter on Windows.  Because Fiddler captures WinINET traffic directly, IE's requests show up in it without touching the browser settings - Fiddler "Just Works (tm)." You can leave these settings at their defaults as well.

The "Gateway" tab in the Options dialog has settings to proxy Fiddler outbound.  It will probably be set to System settings, as it should, but we are going to change that for this exercise.  Just like you would normally do in Chrome, set the proxy to manual, and set the values to localhost, 8080.  (Remember 127.0.0.1 is localhost)

That's it! Now every request and response will go through Fiddler and Burp.  Note that some of your enterprise applications might notice the proxy change and stop working, but at least you can get through your test.  Happy hacking!

Tags:

AppSec | Enterprise Architecture

Why you do vulnerability assessments on internal sites

by Bill Sempf 21. March 2014 05:15

As both a software architect and a vulnerability assessor, I am often asked why we bother to test applications that are inside the firewall.

It's a pretty valid question, and one that I asked a lot when working in the enterprise space. To the casual observer, network access seems to be an insurmountable hurdle to getting to an application. For years, I argued against even using a login on internal sites, to improve usability. That perspective changed once I started learning about security in the 90s, but I still didn't give applications that I knew would live inside the firewall the rigor they deserved until I started testing around 2002.

This all comes down to the basic security concept of Security In Depth. Yes, I know it is a buzzword (buzzphrase?) but the concept is sound - layers of security will help cover you when a mistake is made. Fact is, there are a fair number of reasons to make sure internal apps meet the same rigor as external apps. I have listed a few below. If you can think of any more, list them in the comments below.

The network is not a barrier

Protecting the network is hard. Just as application vulnerabilities are hard to root out, network vulnerabilities are hard to keep up with. Unlike application vulnerability management, handling network vulnerabilities is less about ticket management and more about vendor management.

A lot of attacks on companies are through the network. Aside from flaws in devices and software, we have social attacks too.

[Image: "Password Audit Department"]

Fact is, the network layer isn't a guarantee against access. It is very good, but not perfect. If there is a breach, then the attackers will take advantage of whatever they find. Now think about that: once I have an IP address, I am going to look for a server to take over. Just like if I am on the Internet: finding a server to own is the goal. Once I am inside your network, the goal stays the same.

People who shouldn't have access often do

You have probably heard about the Target breach. If not, read up. The whole thing was caused by a vendor with existing VPN access getting breached, and that VPN access then being used to own the point-of-sale systems. Here's a question for you:

How did an HVAC vendor have access to the POS systems?

It's possible to give very specific access to users. It's just hard. Not technically hard, just demanding. Every time something in the network changes, you have to change the model. Because there are a limited number of hours in the day, we let things go. After we have let a certain number of things go, the authentication system becomes a little more like a free-for-all.

Most vendors have a simple authentication model - you are in or you are out. Once you have passed the requirements for being 'in' you have VPN access and you are inside the firewall. After that, if you want to see what your ex-girlfriend's boyfriend is up to, then it is up to you. The network isn't going to stop you.

You can't trust people inside the network

In the same vein, even employees can't totally be trusted. This gets into the social and psychological sides of this business where I have no business playing, but there is no question that the people who work for you have a vested interest in the data that is stored. Be it HR data or product information, there are a number of factors that could persuade your established users to take, let us say, a 'gathering interest.' I know it is hard to hear - it is hard for me to write. Fact is, the people who work for you need to be treated with some caution. Not like the enemy, mind you, but certainly with reasonable caution.

Applications are often moved into the DMZ

From the developer's perspective, frankly this is the biggest issue. Applications, particularly web applications, are often exposed after time. A partner needs it, the customers need it, some vendor needs it, we have been bought, we bought someone, whatever. Setting up federated identity usually doesn't move at the speed of business, and middle managers will just say 'put it in the DMZ.'

This happens a LOT with web services. Around 2004 everyone rewrote their middle tier to be SOAP in order to handle the requests of the front-end devs, who were trying to keep up with the times. Around 2011, when the services were old and worn and everyone was used to them quietly servicing the web server under the covers, the iPhone was in everyone's pocket.

Then you needed The App. You know that meeting: after the CIO had played with her niece's iPhone at Memorial Day, she prodded the CEO, and he decided The App must be done. But the logic for the app was in the services, and the CIO said 'that's why we made services! Just make them available to the app!'

But. Were they tested? Really? Same rigor as your public web? I bet not. Take a second look.

Just test everything 

Moral of the story is: just test everything. Any application is a new attack surface, with risk associated. If you are a dev, or in QA, or certainly in security, just assume that every application needs to be tested. It's the best overall strategy.

Tags:

AppSec | Enterprise Architecture

Dual boot Windows 7 and Windows 8

by Bill Sempf 14. September 2011 17:38

 

When the Windows 8 Developer Preview was released, I was waiting.  I am not usually the first in line for OS releases, but this time I had a vested interest. I have a book in the works, and this release was an important part of it.

I was ready with VirtualBox, Windows Virtual PC, and a spare laptop (in case I needed to install on the metal). When the ISOs were available, I was first in line, with a fast connection, and I did the Pokemon bit – gotta get ‘em all.

VirtualBox refused to honor the 64 bit virtualization of my HP XW6200. Aaaand, so did Virtual PC. And guess what – the spare laptop I had was 32 bit too. I was stuck.

Except I had my main laptop, which was 64 bit and had Grub and a Linux partition.  Maybe, just maybe, I could turn it into a dual-booting Windows 7 / Windows 8 laptop.  This post is about how I did it.

Getting rid of Linux

The first thing I had to do was get rid of Linux. I did this by removing the partitions that it lived on (the ones Wubi had made for me) and turning them into unallocated space. I did this with the Disk Management tool.

In Windows XP and prior, repartitioning a disk generally meant buying a third-party tool. Starting with Vista, the built-in Disk Management snap-in, part of the Computer Management console, can handle it. To get there, open the Control Panel, change to Icon View, click Administrative Tools, and open the Computer Management panel.

[Screenshot: the Disk Management console]

This image shows my desktop right now, but the laptop had 6 partitions

  • The original Vista recovery partition
  • The C partition
  • Grub
  • Ubuntu 10
  • Ubuntu 11
  • System Reserved

So I deleted the two Ubuntu partitions and Grub and made them into an empty partition. I made one on my desktop to show what I mean.
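
Disk Management is the point-and-click way to do this. The same cleanup works from an elevated command prompt with a diskpart script; the disk and partition numbers below are examples only, so confirm them interactively before deleting anything:

rem remove-linux.txt - run with: diskpart /s remove-linux.txt
rem the disk and partition numbers are examples; confirm them first with
rem "list disk" and "list partition" in an interactive diskpart session
select disk 0
select partition 3
delete partition override
select partition 4
delete partition override
select partition 5
delete partition override
exit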

[Screenshot: Disk Management showing the unallocated space]

Here, I have 250 GB of unallocated space.  I can right-click on it and give it a label so I can be sure to get the right one when I am installing Windows 8.

Making a Windows 8 boot USB drive

Now I needed to install – and I didn’t have any blank DVDs. (Really) I did have a 75 gig USB drive though. My laptop had the capability to boot from USB (as many do) so I decided to make a bootable USB drive.

  1. Start with a drive that you can empty (You can add stuff later if you need to).
  2. Extract the ISO to a folder on your hard drive. Use WinRAR if you don't have anything else that can do that.
  3. Download NovaCorp’s WinToFlash product.
  4. It runs right from the download; there is no need to install it.
  5. Use the Windows Setup Transfer Wizard to move the extracted files from the ISO to the USB drive.
  6. There ya go! (A command-line alternative is sketched below.)
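
If you would rather skip the third-party tool, the drive can be prepared by hand with diskpart and bootsect. Treat this as a sketch: the disk number and drive letter are examples, and bootsect.exe lives in the boot folder of the extracted ISO.

rem usbprep.txt - run with: diskpart /s usbprep.txt
rem WARNING: this wipes the selected disk; confirm the number with "list disk" first
select disk 1
clean
create partition primary
format fs=ntfs quick
active
assign letter=F
exit

Then, from the folder where the ISO was extracted:

boot\bootsect.exe /nt60 F:
xcopy *.* F:\ /e /h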

Making a Windows Partition

So I rebooted after this activity, and I got a Grub error. As it turns out, Grub doesn't LIKE it when you delete the partition it boots from. I needed to load up a repair utility. Since I had a Windows 8 boot drive now, I booted to it by setting my BIOS to boot from USB, and entered the Repair menu.

From there I went to Advanced Tools, got a command prompt, and entered two commands:

bootsect /nt60 C:

and then

bootrec /fixmbr

Rebooted and then Windows 7 booted just fine.

Installing Windows 8

Installing Windows 8 was an awesome experience. I shut down Windows 7 and set the boot device to the USB drive again. When it came up, I had a normal Windows 8 install experience, which took about 10 minutes.

The installer auto-rebooted, and I still had the bootable USB drive in. Since my BIOS was set to boot from that drive, it went back to the installer startup. I just shut down the machine, unplugged the USB drive, and restarted, and the installer continued.

After installation, I rebooted to discover that Windows 8 comes with a boot manager! I get a big, neat Metro UI selection screen asking me whether I want to boot into Windows 7 or Windows 8.

The Finished Product

I was stuck at 1024x768 on a widescreen display. In a last-ditch attempt, I navigated to the Display Settings, selected Advanced, and then Update Driver.

[Screenshot: the Update Driver dialog]

Here I tried the Search Automatically feature and what do you know, it worked.

Now I have a perfectly working Windows 8 and Windows 7 partition, and I can freely boot between the two. The Windows 8 partition even has my Windows 7 partition mounted as a drive!

Nice work, Microsoft. Your work really showed on this one.

Tags:

Enterprise Architecture | Biz

The Case for Modeling Part 2

by Bill Sempf 13. April 2011 19:06

The regular readers of this blog know that my database modeling book was cancelled when Microsoft pulled the rug out from under M and the repository. I did a lot of good writing about modeling in general, so I wanted to put some of it up here on the blog, since the cancellation is official now. This is the second part of Chapter 1.

Why modeling?

Much software is designed on napkins. A story, a mockup, and a model are really all that you need to design how a piece of software works.

This book isn’t about user stories or screen mockups. It is about models, and modeling models. It is one part of the trifecta of software design, and it is a very important part.

What is a model? When you say model to my son, he thinks Legos. That’s not far from the truth. A model is the whole of the component parts that make up software. In the example in Chapter one, the model consisted of trucks, boxes, and the relationship between the two.

The further you get into the so-called 'enterprise' world (meaning really, really big software), the more a consistent model makes it possible to design intelligent software. This is because a good model brings consistent terminology, an abstraction of ideas, and an understandable message to the party.

Creation of a consistent terminology

Working on a model makes you decide what to call things in your domain. In the example in Chapter 1, Boxes could be called Containers. They aren’t. They are called Boxes.

Purists will say that it isn’t just the act of deciding what to call things that matters, but instead the decisions that are made. I disagree on matters of principle here. The simple act of deciding to call something a box and not a container is important. Discussion on what a container is matters. Later on, might you have a bucket? Should boxes and buckets both be containers?

This conversation is frustrating but important. This is the selection of nouns in your new language, and nouns matter. They participate in a common language for your business, both in everyday discussion and in software development.

Language and software design

They are called programming ‘languages’ for a reason. Spoken languages have nouns and verbs, as do programming languages. The difference is that precision matters a lot more in programming languages, so the ‘dictionary’ of nouns needs to be exactly accurate.

In programming languages, you define your nouns and verbs. Nouns are class instances, and verbs are methods. In a language like Visual Basic, you define the members of the language as you go:

Public Class Box
    Public Destination As String
End Class

The system has a concept of a box, and the box has a concept of a destination. We didn’t call the box a container, we called it a box. The box has a destination. This is where the box is going. We didn’t call it an endpoint. An endpoint might mean something else.

Verbs work the same way. You might be able to route a box, for instance.

Public Class Box
    Public Destination As String

    Public Function Route() As Boolean
        Return True
    End Function
End Class
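
To make the noun-and-verb point concrete, here is a minimal consuming sketch; the variable names and the destination value are mine, invented for illustration:

Dim parcel As New Box()
parcel.Destination = "Cleveland"          ' the noun: a Box has a Destination
Dim wasRouted As Boolean = parcel.Route() ' the verb: you can route a Box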

Now the Box knows a verb – you can route a box. It is an action word – something you can do. We are literally building a new language here – the hip term is a Domain Specific Language, or DSL. It is not a general language. It is specific to our needs.

There is a problem here, though. The Visual Basic code is something that is only used by the developers. Left to their own devices, programmers will come up with the language to describe your business all on their own, and then it won’t be a common language, used by the business and the IT staff.

The key is in a common language

The reason that domain languages are important to modeling is that the software comes out a lot better if both the business users and the software developers use the same words. I probably don’t even have to mention this, because most all of the fine readers of this book have had that conversation:

“But that process only occurs if the router has that one form.”

“Which form?”

“Oh, I don’t know. That one form with the routing information. “

“Which routing information?”

“I’m not sure. It’s a little different every time.”

A common language mitigates this conversation, because everyone knows to say "destination" and not "that information that is needed for the route." Things have a name. Those names reduce confusion and lead to better software development.

Description of ideas in an abstraction

Creation of a model provides an abstract space to work with the ideas in the system. The architect and the business analyst don't have to depend on concrete examples, with exceptions left and right, to design the software; they can work in generalities.

Boxes can be routed, for example. The development team doesn't have to deal with the fact that Box ABC123 went to Cleveland and Box DEF456 went to Cincinnati. The exceptions in specific cases are important, and need to be modeled in their own right, but they don't need to be in the general abstraction that is used to talk about the system at the 1,000-foot view.

Making models understandable

Abstraction is important when talking to users. To define user stories, the business analyst needs to talk to users all along the process tree. Most users don't have visibility into the whole process. The receiving guy doesn't know how Box ABC123 got to Cleveland, but he probably understands that the box was routed. Even if he doesn't, the idea can be explained, because it is understandable.

Development teams get entrenched in detail. This detail is necessary as the software gets developed. With an understandable model, though, there is always somewhere to go when you need to return to 'home base' and cover the big picture again.

Images versus text

Usually, architects develop diagrams to show models, like Figure 1-4. This is fine, but it gets to be far too complex once the detail is added. Few diagrams can be easily searched, organized, or simplified. Since we are looking for an abstraction, simplicity is king.

Figure 1-4: A model diagram

An option for a model is to eschew the boxes and use text. Models developed in text can be more easily simplified, and are much more searchable.

Text has a lot of benefits when you are creating an abstraction. First, the viewer doesn’t get tied up by the lines. Often, relationships between items (represented by the lines) are far too complex to make a readable diagram. If you have a simple list of the items that are related, you can just read them rather than tracing them.

The other benefit of using text for the abstraction is that there are fewer blocks to broadening the abstraction. When you realize that the box actually has a relationship with a product, you can just add it, rather than finding the product box on the other side of the page and finding a place for the line.
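
As a purely illustrative sketch (the outline format is mine, not taken from the book or from M), a text model of the Chapter 1 example might be nothing more than an indented list:

Box
    Destination : Text
    Route() : Boolean
    related to -> Product
    carried by -> Truck

Truck
    Number : Text
    carries -> Box (many)

Adding the Product relationship is a one-line change, and a plain-text search for Destination finds every place the concept is used, which is exactly the point.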

The conversation between the architect and the developer

The third consideration of creating an abstraction is the conversation amongst the development team members. The problem domain is rarely known when a group of developers get together to solve a particular business problem. Usually a new feature is just that – new – and the terminology being used is foreign to all but the business people directly involved with the process.

Use of a model mitigates this issue by firming up the terminology right from the start. If all of the entities in the business domain are defined and named, the conversation can revolve around the abstraction rather than examples. Even if the business unit itself uses mixed terminology for parts of the business domain, the model will sort out the communication between the architect and the developers.

This conversation is rarely called out, but it is a very important conversation indeed. In small and large projects alike, the big picture is usually in the architect's head rather than written down. The architect tries to pass the feature-specific information to the developer, shielding them from the big picture. The developer tries to build a focused feature without insight into the overall system. Hilarity ensues.

The construction of a well-understood model provides an abstraction that the architects can use to give the developers context. Context is very important in system development. It doesn’t get in the way of the detailed development work that the programmers are completing, but it does lead to the discovery of contextual errors. Developers can find problems that relate to the system as a whole and apply the fix holistically, rather than just locally in their own feature.

Forming an understandable message

Creating a model allows for communication to be constructed that actually makes sense – especially as it relates to change. In order to communicate to the users and developers about a system, it is necessary to clearly describe the focus points of the system. To clearly describe anything about the system, it is necessary to start with a model.

To some extent, this pales next to the need to provide the next guy with a path. Documentation of systems at the design level is notoriously bad. It is never up to date, and practically never complete. It is confusing and passes on no knowledge of the business domain. It is useless. Software modeling – done correctly – mitigates this considerably.

Communicating to the user

The user is the person for whom the software is written. It is important that the development team can communicate with them on a few different levels. First, the user needs to be able to pass on business functionality to the architect. Second, the development team needs to confirm functionality after incorporating it into the rest of the system.

During the initial communication about the system, the first thing that needs to be completed is a software model. The entities in the model will provide a base level of communication about the rest of the software. As functionality is discussed, there will be no need to focus on specifics because the model is specifically understood, not just generally assumed.

As important is the review of software designs. When user stories and mockups are reviewed with the user prior to commencing development, there is an understood terminology behind the discussion.

Communicating to the developer

The conversation between the architect and the developer was discussed above, but it bears mention again in this context. Creation of a context for the developer to create features makes a huge difference.

Context isn’t the only issue, however. Users communicate with developers too - especially in agile environments. Testers communicate with developers. Managers talk to everyone. Wouldn’t it be nice if everyone could just use the same terminology? I certainly think so.

The model brings an accurate terminology to the whole team, especially when communicating with the developer. As with the other examples in this section, the terminology reduces errors and the time it takes to communicate ideas.

Communicating to the future

Probably the largest afterthought in system documentation is the next guy. After all, the development staff isn’t being paid to make version two easier, are they? They are being paid to write version one. Nonetheless, having to edit a system is a necessity. Either the technology will move on and the update wizard will come calling, or the business rules will change.

The best way to improve on systems documentation is to remove the problem of updating it on a regular basis. A quality software model will assist with that because it will update as the code is updated. Working from a model removes some of the need for comprehensive documentation.

The business case

It is tough enough to explain to the CIO why you need to upgrade to TFS 2010. Describing just how important it is to totally change the development methodology of the organization every few years is really a problem.

In Software Language Engineering, Anneke Kleppe points out that we need to upgrade our development methodologies now, not because it's the shiny new thing, but because we are doing more with less. Dijkstra (1972) called it the Software Crisis, and it is getting worse.

Taming complexity

Year after year, software developers are asked to tackle more complexity. Kleppe describes an environment where it used to be enough to have 'Hello World' show up on the screen. With the advent of the GUI, consider all of the technologies that have to be mastered to get it on the screen - CSS, windowing, threading perhaps, markup; the list goes on.

What's more, the additional computing power has led to additional expectations from users. Now you are expected to deliver identity and membership, with 'Hello Mr. Sempf'. Or to deliver even more, localization and personalization, with 'Good evening, Mr. Sempf.'

Modeling software tames complexity. Part of the growing complexity of software is that no one person knows the whole system. It is a common story: features need to be added, but the UI guy is on vacation and now there are 24 business rules in the RDBMS.

Accurate models make for understandable software. While nothing makes software easy, not having an accurate model will certainly make it harder to understand.

Enhancing communication

From the project management perspective, nothing makes life better than communication. Whether agile or waterfall, trust and transparency are key. Both of these characteristics require communication. Communication is hard when half of the people at the table call a container of items a 'box' and the other half call it a 'carton.'

Modeling software enhances communication. Even if all that is done is the simple act of validating the common language, it will make talking about the project a lot easier. I hope that you would take it even further than that.

As a project grows and features are added, a system of language assures that the concepts in the application are referred to in the same way over time. This is essential to good communication and to ease of updating.

With real software modeling, and a metadata implementation, the model follows the software. This means that throughout the lifecycle, the semantics are an intrinsic part of development through the domain specific languages provided in the model.

Tags:

Biz | Enterprise Architecture | M

A week of neat security stuff

by Bill Sempf 13. February 2011 17:30


This week, I’ll be doing three neat security events, and you are invited!

Wednesday morning, I’ll be speaking at the Central Ohio ISSA about Windows Identity Foundation, OpenID and Claims Based Authentication. Details are here. This is the topic description:

“Escalation of privilege is based on a model of security that is driven by roles and groups for a given application. I am in the Administrator role, the Accounting group contains your username. What if instead you carried a token with a verifiable set of claims about your identity? One that is encrypted, requires no round trip to an authorization server, and can be coded against in a native API? Would that bring more security to our government and medical applications? Or is it just as full of holes as everything else? Join Bill in checking out Claims Based Security via Windows Identity Foundation, and see if it fixes problems or is the problem.”

That evening (whew!) I'll be giving a presentation on high-security locks at the Columbus Locksport International meeting at the Columbus Idea Foundry.  You can sign up here. Please RSVP if you are coming, because we need to plan for a crowd if we have one.  I'll be covering security pins and the idea behind sidebar locks.

Then, Friday, I’ll be at B-Sides Cleveland giving the WIF talk again.  It’s at the House of Blues, and I’ll be talking at 10AM.  The conference is sold out, though.  Too bad - it sounds like an awesome lineup, and I am just floored to be among them. Freaking ReL1K is speaking – he built the Social Engineer’s Toolkit for crying out loud. I’m truly honored.  I am looking forward to this.

Tags:

Biz | Enterprise Architecture | Locksport | AppSec

CodeMash v2.0.1.1

by Bill Sempf 16. January 2011 06:05

 

Another CodeMash is in the books, and all kinds of new stuff was in the offing for me this year.  But first I would be remiss if I didn’t thank Jason Gilmore, Brian Prince and especially Jim Holmes (along with the rest of the board) for uncompromising management of simply the best conference on this topic.  Period. Not for the money, not for the constraints of space.  It is simply the best code-centric conference on the planet.

I owe a lot of people a lot of links and information on a lot of topics.

First and foremost, I was delighted to be asked to speak again, and was pleased to have Matthew Groves join me for a discussion on Monodroid.  We had 100 people join us for a look at how Monodroid came to be and what the future holds.

Then Matt took us for a tour of his excellent Stock Tracker application (shown left), converted from Windows Mobile.  There were a number of good points made all around, and generally a good time was had by all.

The Monodroid documentation contains nearly everything that you need to know to get programming.  The tutorials are the best starting point, and provide templates for all of the major use cases. Matt's application is on GitHub – please feel free to get it and mess around.  It's a good app.  I'll have BabyTrak up here in a couple of months.

The Locksport openspace was a rousing success.  About 40 people were taught to pick, and about that many more stopped me in the halls and told me that they would like to have been there.  I was frankly astonished by the turnout, and would have brought five times as many picks if I had known about the interest – all 15 of the sets I brought were sold.

For those looking for more information:

The Locksport International site has a lot of good links to publications and whatnot.  Deviant Ollam's book, Practical Lock Picking, is excellent – he is the guy who wrote the presentation that I gave (twice). The best community is online at Lockpicking101, and they have an IRC channel too.  If you need to order picks, look at LockPickShop – Red does an awesome job.  The 14-piece set is on sale right now and is a great learner's set!

Finally, if you are in the Columbus area please join us at the Columbus branch of Locksport International.  We have a Meetup group – just go sign up and you'll get the locations for each meeting.  You can attend for free, but if you want a membership card and to participate in competitions, it's $20 a year.

And last but not least, I got a ton of comments on the jam band.  Lots of questions too.  Yes, I was a professional musician for many years.  I taught at a lot of area band camps, like Upper Arlington and Teays Valley.  I played in a Dixieland band in London, Ohio, called the Lower London Street Dixieland Jazz Band and Chamber Music Society for nearly ten years. I haven't played in quite a while, and I have to say it was a lot of fun.  Hope to do it again next year.

All in all, an awesome conference.  Again, I was a net producer of content rather than a consumer of content, and that’s OK.  I still learned a ton just by chatting with friends old and new, and picked up information about the hip new technologies that the cool kids are using by osmosis.

Hope to see everyone at DevLink!

Tags:

Biz | Personal | C# | Enterprise Architecture | VB

SQL Server Developer Tools Part 1: Adding an existing project

by Bill Sempf 4. January 2011 00:33

 

I recently needed to add some instrumentation to the DotNetNuke code base, and decided to use the new SQL Server Developer Tools to load the database in as a project and manage it there.  I had written some internal documentation for the project while it was still under wraps, and was glad to see it so strong when it was released for public consumption.  I thought seeing how I used the application might give someone out there a hand.

Since we have an existing project, we have an existing database. Fortunately, SSDT has an app for that. You can add a new SQL Server Database Project, which will effectively take a snapshot of the database and expose it to Visual Studio as T-SQL scripts. These are the scripts that will eventually make up the development base for the software.

We will start by creating a new project. Right click on the Solution file and select New Project… The project selection dialog appears, and if you click Data, you get the template for the SQL Server Database Project. Name the project and move forward.

This project represents everything that is part of a database file in SQL Server. There is a properties folder in the project that will show you all of the database level properties – usually handled by SSMS. This is just one of many examples of SSDT bringing the DBA and developer closer together, as shown in the figure to the left. Operational properties such as the filegroup and transaction details are at least available for viewing by the developer and alterable locally. Permissions still hold, so you as a developer have to be set up to change these kinds of details to change the production system. At least you can alter them locally and see what works without a call to the DBA.

The original project is empty. In order to get the existing database into the project, an import needs to occur. Right click on the new project and then Import Objects and Settings. Select the local database and pass in the appropriate credentials. I selected the DotNetNuke database from my developer's instance, but you should select whatever you want to incorporate.

The Import Database Schema Wizard has all of the options that define how you will interact with the database once it is in the project. I like the default settings, which define a folder for each schema, and a folder within for each database item type.

There is an option to import the SQL Server permission structure, but I find that most projects don’t use that. My DotNetNuke project uses the SQL Membership Provider, however, so there is a mapping between the login structure of the database and the Users table of the membership provider. For that reason, I do turn on Import Permissions.

Once the values are set, just follow the steps:

  1. Make a new connection
  2. Click Start
  3. Watch the magic happen
  4. Click Finish
  5. Let’s see what we have

What we have here is everything that the database has to offer, in T-SQL Scripts. This is important. Every change that is made can be included in a DAC, because there is source control and an understood level of alteration. Changes are known by the system, and go into a pot to turn over to operations, or to be reviewed by the DBAs. Changes are not made by altering the database anymore.


Taking a look at the DotNetNuke database project, you’ll see the main schema (dbo) broken into the familiar Tables, Views, sprocs and functions. The project also has scripts for three other database members – Scripts, Security and Storage.

Storage is just the database definition. In the case of this project, it is simply:

ALTER DATABASE [$(DatabaseName)]
    ADD FILE (NAME = [DotNetNuke],
              FILENAME = '$(DefaultDataPath)$(DatabaseName).mdf',
              SIZE = 9216 KB,
              FILEGROWTH = 1024 KB)
    TO FILEGROUP [PRIMARY];

It's part of the completeness, but not something that you will alter a lot. The Scripts directory is another one that you'll change once and forget about – it contains the pre- and post-deployment scripts. Every time you build and push to the actual database server, these scripts will be run.
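
As an example, the post-deployment script is the usual home for reference data. The table and values below are hypothetical, not part of the DotNetNuke schema; the point is only the shape of the script:

-- Script.PostDeployment.sql - runs after every deployment
IF NOT EXISTS (SELECT 1 FROM dbo.StatusCodes WHERE StatusName = 'Active')
    INSERT INTO dbo.StatusCodes (StatusName) VALUES ('Active');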

The Security folder is fascinating. It contains all of the roles and related schemas, and the associated authorizations. If you have a project that secures database assets in the database management system, this could be awesome.

The meat is in the schema folder, called 'dbo' in the DNN example. This is where the scripts you would expect to see in a project like this are held, and where we will be doing the majority of our work. Each entity in the database is scripted separately here, and we can modify or add to our heart's content, and deploy separately.

Set up some new entities

The first thing needed for the instrumentation being added is a table for some timing data. Start by right clicking on the Tables folder and selecting 'Add New…'. Notice the nice Visual Studio integration that shows the asset types which can be added. Select a Table. Name the table 'Instrumentation' in the Add New Item dialog and click OK. There is a base template for a table; go ahead and change it for the new fields:

CREATE TABLE [dbo].[Instrumentation]
(
    InstrumentationId int NOT NULL IDENTITY (1, 1) PRIMARY KEY,
    ClassName varchar(64) NOT NULL,
    MemberName varchar(64) NOT NULL,
    ElapsedSeconds bigint NULL,
    Exception text NULL
)

There is a Commit button that is so tempting at this point, but it isn't the button you want right now. Commit peeks at the declarative information in the script – effectively doing a pre-parse – and tries to make appropriate changes to the target database. It is more or less like the Table Designer in SSMS at this point, using references to concepts in the database to make decisions rather than just running the T-SQL as coded.

Click the Execute SQL button in the text window’s toolbar to run the CREATE TABLE procedure and save the table to the database defined in the project properties. (You can read more about that in the Deployment section below.) In this way, the database is effectively disposable. At any time, you can get a copy of the scripts from Team Foundation Server, and generate a whole new copy of the database, minus reference data. For right now, though, we are just adding this one table.

Great, that gives us a place to put the data. The next step is a way to get it there. Right click on the Stored Procedures folder and select Stored Procedure from the context menu. Just like with the table above, Visual Studio gives a base template. I changed it to the code below:

CREATE PROCEDURE [dbo].[AddInstrumentationEvent]
    @ClassName varchar(64),
    @MemberName varchar(64),
    @ElapsedSeconds bigint = 0,
    @Exception text = ''
AS
INSERT INTO Instrumentation
(
    ClassName,
    MemberName,
    ElapsedSeconds,
    Exception
)
VALUES
(
    @ClassName,
    @MemberName,
    @ElapsedSeconds,
    @Exception
)
RETURN 0

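Just to show the shape of a call, here is what executing the new procedure looks like; the class and member names are made-up values for illustration:

EXEC dbo.AddInstrumentationEvent
    @ClassName = 'UserController',
    @MemberName = 'GetUser',
    @ElapsedSeconds = 2,
    @Exception = '';
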
I need to make a quick shout out here for the IntelliSense. It's expected, I suppose, that this should support full IntelliSense integration, but I was just shocked at how comfortable I personally found it to use. I do not care to write SQL code because it is so cumbersome. Having that modern development experience talked about in the introduction makes a big difference.

That’s all we got for new features – it’s a short paper after all. Clearly any entity that SQL Server supports can easily be added to the project, stored in source control, and managed like the rest of the application. What’s more, it can be tested and refactored like any other part of the application.

Red, green, refactor

After further review, it was decided that Instrumentation wasn't a good enough name for the table, since it doesn't accurately represent what was actually put in the rows of the table. Instead, the name InstrumentationEvents is supposed to be used, so we need to rename the table.

Right click on the table name in the CREATE statement of Instrumentation.sql and select Refactor -> Rename. Change the name to InstrumentationEvents and click Next. Notice, as shown in the figure to the left, that SSDT got it right. The preview is very helpful. It finds all of the consuming database members, and lets you determine which of them to apply the change to. Even in the stored procedure, the pluralization is correct, changing AddInstrumentation to AddInstrumentationEvent rather than AddInstrumentationEvents. That trailing s might not seem like much to some people, but it can make a big difference in a convention-over-configuration based system.

Rename isn't the only refactoring available in SSDT; there are also T-SQL-specific features. If you are working in DotNetNuke, open up the GetUsersByUserName.sql script in the Stored Procedures folder. It's a little overdeveloped and has too much UI logic in it, but it works for this example.

Line 31 has a SELECT * on it, and frankly this procedure is too slow as it is. We don't need to add a table scan. The Refactor menu has an Expand Wildcards option, and I recommend its use here. Right click on the asterisk and select Refactor -> Expand Wildcards, then click on SELECT * in the treeview. The Preview Changes dialog will now show us how the sproc will look with the wildcard expanded, just like the figure to the right. Click Apply to have the changes applied to the procedure.
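
The effect is simply that the asterisk becomes an explicit column list pulled from the view's definition. As a sketch (these are representative columns, not the actual vw_Users definition):

-- before
SELECT * FROM vw_Users WHERE Username = @Username

-- after Expand Wildcards
SELECT UserID, Username, DisplayName, Email, IsSuperUser
FROM vw_Users
WHERE Username = @Username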

Don't overlook the various features of the Visual Studio IDE that can now be used to manage the T-SQL code. For instance, consider that GetUsersByUserName.sql file. Right click on the vw_Users token on line 10 and select Go To Definition to be taken to the view definition in the database project. The view doesn't help us much, because we want to see the table in question. Scroll down in the view to the FROM statement, right click on Users, and select Go To Definition again to see the Users table.

As expected, you can then right click on the Users table name and Find All References to see the 44 places that the Users table is used in the database project. The usefulness of this in a legacy environment can't be overstated. Finding your way around a project this easily, digging from the core code to the database and navigating using the built-in features, will significantly reduce the time spent getting up to speed on existing applications.

What's more, code-level refactoring isn't the only thing that the data modeling group is pushing for in SSDT. There is project-level refactoring available as well, which is a step in the right direction of whole-project management. Across-the-board changes to software code are something that Visual Studio already excels at, and SSDT is working toward providing the same kinds of features for database projects.

For instance, right click on the database project and select Refactor from that context menu. Aside from the wildcard expansion seen in the code-level refactoring, note the Rename Server/Database Reference option. It's a whole-project way to change references that would be managed in configuration files for a C# project, but need to be controlled with refactoring in T-SQL.

Refactoring is an important part of software development. Though it has been available in third-party tools for a while, having a standardized experience that is so tightly integrated with Visual Studio will make a big difference to the average developer. While unit testing integration with Visual Studio still isn't there for T-SQL, it is still a step in the right direction toward that modern development experience we have been talking about.

Tags:

Biz | Enterprise Architecture

I'll be speaking at DogFoodCon

by Bill Sempf 12. October 2010 17:33

I'll be speaking at the 2010 DogFood Conference that Microsoft puts on here in Columbus.  Danilo Castilo runs it (largely) and it is pretty cool - a neat community event on a budget.

It's a cool collection of talks about the newest Microsoft products and how people are using them.  Thus the name: 'DogFood' from the phrase 'eating your own dog food.'

I'll be speaking with Mario Fulan about using ADFS 2.0 to cross domain borders.  If you don't already know Mario, he is a beast - one of like ten Certified SharePoint Masters in the whole freakin' universe or something.  He has forgotten more about SharePoint than I will ever learn.  I do know Windows Identity Foundation a little bit though, so that's what I'll be covering.

The conference is at www.dogfoodcon.com and is selling out really fast.  If you are interested in the hot new stuff, check it out and get registered while you can.  It's next month - November 4 and 5.

Tags:

Biz | C# | Enterprise Architecture

On the death of Quadrant.

by Bill Sempf 3. October 2010 20:44

It's common knowledge that I have been following Oslo / SQL Server Modeling Services very closely.  I am working on a book on the topic, and have posted a number of blog entries.  The speaking circuit has been good to me too, and I have given my Software Modeling With ASCII talk five or six times already this year.

My focus has been on M, but today we are talking about Quadrant.  Quadrant is part of a trio of tools that includes M (a language to build data and domain models) and Modeling Services (a set of common models and a repository).  Quadrant itself is a tool to interact visually with SQL Server databases.

I've been watching Quadrant for over a year now, and I had a lot of questions about its viability in the marketplace.  As a data management tool, it was underpowered, but as a data browsing tool, it was overpowered.  When I eventually came to realize that it could be a domain model browser, my interest was piqued.  Since you could define the quadrants using M, it would be effectively possible to build comprehensive data management 'dashboards' in Quadrant, and use them in a power user role.

Over time, however, I began to realize that this was an edge case.  The business users and data managers that need these solutions will still find development in M too time-consuming, and the professional developers who would be asked to help them would just rather work in C#.  It will end up that the business users will go back to Access and Excel, data managers will just use SQL Management Studio, and professional developers will use Windows Forms or XAML in C#.

Apparently, Microsoft saw this too.

I am sad to see Quadrant go.  It was a beautiful application, and could have been the foundation for a number of very, very cool tools.  Hopefully the Data team will find another use for the technology.

It should be noted that SQL Server Modeling is not necessarily dead at all.  Models can still be built with M, and exposed to the world in OData.  Applications can still be built against this model using Visual Studio, and the data can still be managed using SQL Management Studio.  

The loss of Quadrant doesn't impact this vision, and I hope that Microsoft realizes this and continues down the path toward an enterprise-class repository.  It's the last piece of the puzzle that keeps large enterprises from deploying SQL Server in application-centric environments.

 

Tags:

Biz | Enterprise Architecture | M

Without writing a single line of code

by Bill Sempf 3. August 2010 10:15

There has been a recent influx of simplified integrated development environments on a number of platforms.  The goal of these IDEs is to make it possible for Line of Business (LOB) users to build data-driven applications easily and simply.  This is an admirable goal, but there are a few problems.  For some reason, even though the problems recur again and again, the same mistakes are being made.

First is the assumption of the needs of the user.  In a boxed IDE like Microsoft Access or the new LightSwitch, the user only has the tools that are given to them.  The moment that the requirements change, a black box is introduced.  Sure, you can build a custom control to show the Flash ad in your advertising management application, but the moment that a code change needs to be made, when a Flash version changes or whatnot, the dev can't be found, the control isn't in TFS, and no one knows how to fix it, what language it is in, or anything.  The whole app goes down the tubes because one custom component was lost.

Second is application lifecycle.  Applications like LightSnack ... er ... LightBeer ... uh ... LightSwitch have a short shelf life.  Need an example?  InfoPath.  A number of companies bet the farm on InfoPath.  Where are those apps now?  The bit bucket.  Yes, I know InfoPath is still around, but it isn't an effective technology anymore. Do you really want to bank on the existence of LightSwitch in two years, much less twenty?  I don't.  Sure, you can 'graduate' the code base to Visual Studio, but how does that code look?  How about when a VS upgrade comes around?  Will it hold together then?  And I am not picking on LightSwitch - Access has all the same problems.  I recently spent weeks at the Ohio Department of Health upgrading an Access 2003 application to Access 2007 when 2010 was already out.  The shelf life of a tightly integrated IDE has to be taken into account.

Third is the famous "Just because you can doesn't mean you should."  You can't build eBay in WebMatrix (or even the original Web Matrix), but it doesn't keep people from trying.  Then, when the business is depending on it, the failure becomes evident through a scale problem or a requirements or scope shift, and the 'fix' becomes an emergency.  This is just not a good idea, but it seems that no one will take a moment to consider the implications, either when building the IDE or when planning the applications.

Finally, this flies in the face of every architectural best practice out there.  Here.  Take my data and just write something in some generic tool to edit it.  What?  That's not how I want my organization to be run.  You may not edit that data without using the controls provided, I am sorry.  I don't want to have to manage hundreds of little applications, built on tens of little IDEs, either.  That's not how Enterprise Architecture is supposed to work.  So you think enterprises won't try to use this?  See point three above.  If they can, they will. (hat tip to @srkirkland)

Unlike a lot of developers, I don't have the 'I'm a professional developer and I write code, so I think drag-and-drop tools suck' attitude.  I am not like that.  I am a pragmatic guy.  I use simple tools for simple organizations' simple problems all the time.  But I go in knowing that the solution has a limited lifespan.  Honestly, the tools that are coming out today won't be used like that.  They will be used like InfoPath and Access, to write LOB applications that will become essential, and then go stale and have to be rewritten in a hurry.

These kinds of IDEs lead to the kinds of practices that lead to failed IT strategies.  Consider carefully before using them.

Tags:

Biz | Enterprise Architecture

Husband. Father. Pentester. Secure software composer. Brewer. Lockpicker. Ninja. Insurrectionist. Lumberjack. All words that have been used to describe me recently. I help people write more secure software.
