Categories
Geeky/Programming

When To Code To an Interface

When should you code to an interface? In my opinion, only when you have to “INTERFACE” with a 3rd party component or some external piece you might have to swap out. Writing an interface for every concrete class seems way too redundant. It is probably easier to extract an interface from a concrete class when you actually need one, instead of coding an Interface and a Class for every entity object you want to create. Otherwise you just end up duplicating code that you will never use.

Oh yeah, your UML (who even uses UML?) will look good, but its usefulness is lacking. I say write interfaces for things like file system interaction, database interaction, or some other 3rd party or external thing you need to interact with. Then you can easily swap out the backend later if you need to.

I just don’t get writing, say, an IPerson interface for a Person object. They are just going to be exactly the same. Down the road I don’t see you swapping it out for a new “Person”. Maybe, but at that point you might as well create your IPerson and then create the APerson and BPerson classes that implement it.

I guess what I am saying is: follow the YAGNI (You Aren’t Gonna Need It) principle, and you will see the benefits in your code.

Categories
Geeky/Programming

Don't Be Afraid to Question "Why Are We Doing It This Way?"

"A boy asked his mother why she cuts the ends off a pot roast when putting it into the pot. His mother told him that’s how her mother taught her to do it. So the boy went to his grandmother, and he got the same answer. Then he went to his great-grandmother and asked her the same question. The answer was: Well, back then my pot was too small and the meat didn’t fit inside." – Steve Maguire’s book Debugging the Development Process.

Sort of playing off my last post

IT and Development Best Practice: Just Because You Can Doesn’t Mean You Should..

You shouldn’t be afraid to ask "Why are we doing this again?" Usually in business and IT/development the answer is: "That is the way it was when I got here, so we just kept doing it that way." Now, I am not saying that every practice and procedure in place is bad or wrong; what I am saying is that you should not be afraid to ask why a certain thing is done the way it is done.

"Why are we using batch files to do XYZ?" – now we can use VBS/C#/PowerShell/One Line CMD, etc

"Why are we using MS Access as a backend?" – now we can use SQL2005!

"Why do we have 18 steps to get something approved?" – now we can streamline it and speed up everything!

"Why am I doing more documentation than programming?" – documentation goes out of date 2 minutes after it is completed, let’s self-document our code with unit tests!

"Why do I spend more time in meetings than actually working?" – do you really need to be in all those meetings? Can it be solved without a meeting? Via email? Phone? A small face-to-face talk?

and the list keeps going, but you get the idea…

Always question "Why?" and sometimes you will see that things are being done a certain way just because that is the way they have always been done. Don’t be afraid to change things when you see they need changing, either. Like it is always said – "there is always room for improvement."

Categories
Geeky/Programming

IT and Development Best Practice: Just Because You Can Doesn't Mean You Should..

One thing I have learned over the years in IT and Development is this: Just because you can do something doesn’t mean you should.

What does this mean? It means that sometimes software, programs, and hardware will let you do or configure things in a way that is possible in the software, but that doesn’t mean you should do it.

Some examples:

1) Development – You can add a gazillion button controls to a form. Your development IDE doesn’t complain at all. Then you run your program and it totally dogs or has weird issues. Why? Because common sense tells you that you shouldn’t put that many controls on a form; you need to redesign! Some gurus on the subject (Aaron Ballman, Raymond Chen) have blogged about this, and it is talked about all around the web.

2) Networking – Windows 98 (and other OSs – this is just the one I know from experience) allows you to set two gateways on your adapter. Does this make sense? Two DEFAULT gateways? Shouldn’t there just be one? I have seen firsthand two gateways that don’t talk to each other, and end users could sometimes connect to the Internet and sometimes connect to internal stuff, but not at the same time! Doh! Chris might be able to add more to this as I am not a networking guru, but I know it just isn’t right.

3) Data Warehousing – The way SQL Server Analysis Services is set up, you have your SSAS server, and then you can make multiple "databases" under that instance – sort of like regular SQL Server: Instance->Databases->Objects. The thing is, under an SSAS database you can create multiple cubes. Now, there might be some small instances where you want to do this, but just because the GUI/API lets you create multiple cubes under an SSAS DB doesn’t mean you should! For one, you can’t share linked objects the way you might want when both cubes are in the same DB. The other thing is that if the cubes are tightly bound, you run into MAJOR pains when trying to deploy, process, etc. You risk taking one cube offline because you are having a deployment issue with the other cube. Keep your cubes in separate SSAS databases! 🙂 I ran across this the other day, which finally put the nail in the coffin on this issue for me.

I am sure there are many more instances where you have the ability to configure or do something but shouldn’t. It really can lead to major headaches and issues for all involved if common sense isn’t used beforehand. Sometimes there is an unknown factor and you just have to decide, but when you later realize the mistake, you should go back and fix it (that is probably a good topic for another post in itself!).

Keep geekin!

Categories
Life Random

Time to Reset?

The xkcd comic today says it all, which actually got me thinking about writing this post

 

Funny, yes. I think sometimes everything just needs to be reset back to zero. I remember back in the day, trying to beat Metroid on the NES and having to leave the NES on for days without shutting it off. Sometimes it would lock up – hit reset. You probably reboot your computer every day – time to reset! Development projects usually get to a point where there is so much bloat – for small programs and large (Vista was Microsoft’s attempt at a reset) – that you just say, let’s start from scratch again, we can do it better.

High school to college – reset. When you move somewhere new – reset. Every year you have your birthday, xmas, New Year’s – resets. Sometimes in relationships it is best to just reset – start over, forgive and forget, get back to ground zero. Every day you wake up is another reset, another day to try something new, make something better.

Sometimes things just need to be reset, just to be reset – like the sign above. 2008 is right around the corner, and the new year is usually a good time to reset those bad habits, or just reset goals and timelines and get a fresh look at everything going on. It should be an exciting year…

Categories
Geeky/Programming

Programming Home Projects – Like Playing Nintendo?

Ever since I started programming, I have always had some crazy idea on the side that I would be working on – some project, some program I could write. A few have seen the light of day (Fat Finger Media Center, Pocketblogger, amongst others…). I was thinking tonight, and it dawned on me: doing development projects on your own, at home, is like playing Nintendo. Why do I say Nintendo? Because that is what we did when we were 8. It is like people who call all types of soda pop “coke” – I call all video games “Nintendo”. OK, so we have that down:

Doing development projects on your own, at home, is like playing Video Games.

How is it like playing video games? Well, everyone who has played video games knows about it, and here is how it goes. You have this awesome game, but it is 1 player. You and your buddy can play together by switching off when you get killed, or if one guy is better than the other, then the other guy just watches, helps, looks for stuff the other guy will miss, goes and gets chips and beers, looks stuff up on the net, etc. – a video game co-pilot, if you will. I always have to get a Simpsons reference in there too (from the episode "Alone Again, Natura-Diddily"):

BART (playing a Christian video game while consoling Rod and Todd): Ooh, full conversion!
ROD: No, you just winged him and made him a Unitarian.
TODD (after Bart beats the first level of the video game): Can we play now?
BART: We are playing. We’re a team.
ROD AND TODD: [pause] Yay!

See, Rod and Todd are having so much fun, and so is Bart 🙂 – And also, don’t forget cooperative video games! Working together to get to the end – perfect!

image

No, seriously though, to me it is the same as doing home dev projects. When you do them yourself, it just isn’t as fun as doing them with someone else, as a team. People can bring different skills to the table, which makes things better (artistic abilities for graphics, for example, are something someone else could totally bring to the table – even networking, server setup, backend stuff).

Now, if I could just find some motivated people with some extra time that want to learn how to do some cool stuff, and end up making some cool stuff in the process, all the while having fun, well, then, we would be playing, we would be a team.

BTW: I have a couple cool projects I want to work on, I have them in my head or semi-started, just need to get motivated!

Categories
Geeky/Programming

Source Control At Home: Subversion (SVN/TortoiseSVN)

Today, Joel asked me what to do to get source control going at his new job, since they don’t have any. He mentioned I have never blogged on SVN or TortoiseSVN at all, so, here goes 🙂

Currently I am using Team Foundation Server (TFS) – which is nice, integrates with VS2005, etc. But really it is only good if you are using VS2005; otherwise it is a pain. What if you have older Classic ASP apps, or PHP, or whatever?

This is where TortoiseSVN comes in – I have used it in work scenarios, as well as at home. Easy to set up, and easy to use, and it is pretty scalable if you go bigger, sites like SourceForge now use it.

First thing, you want to download TortoiseSVN here. You could just get the plain SVN client – it’s command line, it works, but it is a PITA if you like Explorer shell integration – use Tortoise.

Once you install TortoiseSVN, it asks you to restart. If you are lazy, just kill explorer.exe, then Ctrl+Alt+Del, Task Manager, File->Run, explorer.exe to get it back. Basically it just needs to restart that process to add the shell integration.

Now, you want to create a repo. Right click inside an EMPTY folder, in the whitespace – you will see some more options: SVN Checkout and TortoiseSVN, with a sub menu.

image

You want “Create repository here…”. Just use the defaults and hit OK; it should tell you that you have a repo! I made mine:

file:///C:/Users/steve.novoselac/Documents/repo

Now if you go to a different folder, and right click, TortoiseSVN->Repo Browser and put your file path in there, you can browse your repo, create folders, etc. Now, you need to import files/project, and then check them out somewhere.

The best thing to do is go to a project folder, say MyProject, right click, TortoiseSVN->Import, put in the path to your repo and a log message of “Initial Import”, and hit OK. Let it chunk through the import.

You are now ready to check out and use the source controlled files. Go to a new folder, called Projects or whatever you want, just somewhere else besides where you are at, and then right click, SVN Checkout. You can browse to your repo, find the folder you imported and then checkout. It will put that in your new folder and there will be little icons on all the files, green icons, because they are good to go.
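For reference, the same create/import/checkout cycle looks roughly like this with the plain svn command-line client (a sketch – the paths and project name are just examples, and TortoiseSVN drives the equivalent operations through the Explorer menus):

```shell
# Sketch of the repo/import/checkout cycle with the command-line client.
# Paths and the project name are examples, not from the post.
WORK=$(mktemp -d) && cd "$WORK"

svnadmin create repo                          # "Create repository here..."
mkdir MyProject && echo "hello" > MyProject/readme.txt

# TortoiseSVN -> Import, with "Initial Import" as the log message
svn import MyProject "file://$WORK/repo/MyProject" -m "Initial Import"

# SVN Checkout into a fresh working copy (the green-icon files)
svn checkout "file://$WORK/repo/MyProject" wc
svn status wc                                 # prints nothing: everything is clean
```

From there, `svn commit` and `svn revert` inside the working copy correspond to the check-in and revert items on the right-click menu.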

From here you can modify files, and they will have little red icons, and then you can revert or check in those changes to your source control repo.

Now, with VS2005 (and VS2003), when you build a project, the /bin and /obj directories change every time, and if you are in a team environment, the .suo (user options) file changes all the time too. You want to remove these from source control, or you are always going to see a little red icon on the highest level folder. It is best practice to remove any file that gets changed by some outside force (another common one is the Thumbs.db file in a picture directory, for example).
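With the command-line client, the usual way to do this is the svn:ignore property. Here is a self-contained sketch (on a throwaway repo; in real life you would run the propset inside your project’s working copy, after removing any already-committed build output with something like `svn rm --keep-local`):

```shell
# Sketch: mark build output as ignored so the top folder stops showing modified.
# Uses a throwaway repo; the artifact names are the VS2005-era ones from the post.
W=$(mktemp -d) && cd "$W"
svnadmin create repo
svn checkout "file://$W/repo" wc > /dev/null
cd wc

# One ignored pattern per line on the directory's svn:ignore property
svn propset svn:ignore "bin
obj
*.suo
Thumbs.db" .

svn commit -m "Ignore generated files" > /dev/null
svn propget svn:ignore .        # shows the ignore list
```

Once committed, files matching those patterns no longer show up as modified, so the folder icon stays green.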

I left out a lot of smaller details about checking out, checking in, etc., but it is pretty self explanatory. My advice would be to set up a test repo and fool around with it before you put any of your prized projects into it, or make a new repo once you get the hang of it. By the way, this is just a “file based” repo; you can also set up “web based” if you have Apache running, but who the heck would run Apache? 🙂

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

Business Intelligence and Analysis Services in the "Real World"

A reader sent me an email this weekend:

I wonder if I could ask your advice as a BI / data warehousing specialist. I have been studying Analysis Services recently, having read a couple of books which step me through the process of building the cubes etc., but as I don’t come from a DB background, one thing that is not clear to me is how does one determine that they need to use BI / Analysis Services etc. in the real world? Like you, I am a .NET developer with a background of building thick client apps, and am familiar with creating layered architectures etc., building on frameworks like NHibernate to abstract out the DB stuff into my more familiar object world. My question is, how does one generally interface with this data warehousing / Analysis Services stuff in the real world? I am looking for answers from people who have used these technologies in anger and not, like me, from canned textbook example scenarios etc. Thanks for your time, it would be appreciated.

 

I wrote a response, but after reading it I figured I could post it up here as it is pretty general.

Basically, what I have seen is this. You make your application (either web or Windows) that saves transactional data, or logs, or something like that. End users usually want reports off of that data. At first, developers report directly off that data (the OLTP database). It becomes slow and unusable once tons of data is in there, so developers tweak timeouts, things like that. The next step is custom summarization into other tables, and then archiving off transactional data. Usually, because developers don’t know about data warehousing/BI, all of this is custom up to this point.

 

Eventually, someone realizes that “hey, there is a way to do this better”, which involves a structured ETL using stored procs or SSIS or something similar. Also, instead of ad-hoc or custom summarization tables, a well designed data warehouse (OLAP) database is key.

Now, from there, you can write your reports off your OLAP database, which is OK because the data is summarized, scrubbed, etc. But you really give it an adrenaline boost when you create a cube off that data warehouse; it takes care of the summarization, the relationships, all that. You put the reporting in the hands of your end users (Excel 2007, for example) – let them pivot and slice and dice the data. It is all set up for them to do it, with really nothing you have to do on your end except make sure the cube is processed regularly off the DW.

You are basically abstracting your OLTP data up two levels. In all reality, you should be able to query your OLTP data for a time frame and get, say, revenue, and then query the OLAP database and the cube and get the same results. Now, with your cube, you can still access its data from your .NET apps (using ADOMD.NET to query it), which is cool, or you can write Reporting Services reports directly off the cube; that makes it a lot easier than writing custom .NET reports.

So, for interfacing with your data warehouse, the best options to get data in are usually SSIS packages or stored procedures – no .NET coding, really. To get data out, you would probably want to use Reporting Services, or you can query it with SqlClient in .NET apps like you would a regular OLTP database.

For the cube, you can get data out using ADOMD.NET in .NET, you can query using MDX, XMLA, etc. in Management Studio, or you can write Reporting Services reports, but the best client is Excel 2007; it is built to be tied to SSAS 2005, so it really works out well.

One thing about all the technologies (SSIS, SSRS, SSAS, AMO, etc) is that there really isn’t a lot online as far as examples and documentation, or it is scattered. It is getting better though. Most of the stuff I do I just have to figure out on my own, through trial and error, but it is OK.

Maybe not a cover-all response, but it covers what I have seen in the past, from my own experiences and others’.

Categories
Geeky/Programming

Unprotected Wi-Fi: Encrypt your traffic with an SSH SOCKS Proxy to Browse Securely

Unprotected Wi-Fi: the bastion of coffee shops and airports everywhere. Browsing on these hot spots is basically like having unprotected sex with the Internet. My new solution:

image

Just kidding. Anyways, if you do browse on an unprotected hotspot, it is very easy for anyone to see all your web traffic, your passwords, your email, basically everything you do. They can save this info, then go home and get into all your accounts, basically take over your life if you give away the right info. You don’t want that do you?

Now, when I decided to finally get secure, I did some research on Google; I figured someone had already done this and documented it well. The best and most comprehensive thing I found was on Lifehacker, an article in their “Geek To Live” series. I went through these steps and had some issues getting things working. I followed their steps to the letter, but it still didn’t work. I am on Windows Vista, and in the comments of the article it looks like other people had issues as well. We will get to that later 🙂

The Lifehacker article has you use Cygwin for all the SSH stuff. Pretty much, this is what doesn’t work on Vista, at least from my conclusions. Over the last week or so I have been working with network guru Chris Super (my loyal tester) to get this whole setup working, and he came to the same conclusion. So, what do you do when Cygwin doesn’t work and you are running Vista? Well, there are some other tools you can use to get this all running smoothly. And a side note: Cygwin – ugh, why don’t you have an uninstaller? So 1996…

Step 1: SSH Server

First you are going to have to set up an SSH server. I have a Vista box at home sitting under the TV – the perfect candidate. Instead of Cygwin and configuring stuff with a command prompt, you can install a cool looking GUI SSH server, freeSSHd – this program really is cool. First, they are using components from WeOnlyDo Software, which I have used before in some of the .NET networking tools I have written. Second, they make this really easy to set up and configure. You install it, add a user (NT auth or regular), set some options for tunneling and access, and you are set. If you have issues with this step I can help you out, but the options are pretty self explanatory. One thing I found is that when you add a new user, you need to restart the service for the user to work. One other thing I did was run my SSH server on a port other than the default (22), as people just try to hack this port all day. Pick something way up in the range – 22822, for example.

Step 2: Dynamic DNS

The second step, unless you are running in a datacenter, is to make it so you don’t have to connect to your IP address – instead, we want a cool domain name. What I used for this is Dynamic DNS. Chris actually blogged about this a while ago, which reminded me of the service. They have come a long way since they first started, which is nice. What you do is sign up for their service and then install their updater tool on the SSH server or another computer on your internal network. This tool checks your remote IP on an interval and updates the Dynamic DNS service. Pretty cool. Now you can remember a human readable domain name instead of your IP address!

Step 2.5: Configure your Home Router

Now that you have your SSH server running and your domain name pointing at your cable modem, you want to configure your router. Most, if not all, routers have a way to forward ports to internal IP addresses. What you want to do is forward the port you configured in step 1 (22822) to the internal IP address of your SSH server box. That way, when you make requests to your SSH server from outside your internal network, the traffic will go to the correct box. Save your settings and you are good to go.

Step 3: SSH Client

Here is another place where Lifehacker’s steps didn’t work for me, because of Vista again. Cygwin really doesn’t work worth a damn on Vista, it seems. A really good SSH client that works on Vista is PuTTY. There isn’t even an install; it is just an exe. Awesome. Basically what I did was create a batch file to run PuTTY with the command line options I wanted. The major caveat to get this to work is you need to run PuTTY as an admin. I have that already set up on my box so no issue, but you might need to run a cmd prompt as administrator to get this to work!

One line in the batch file:

putty -D 9999 -P <the port you configured in step 1> -l <login name you configured in step 1> -ssh <your domain name from step 2>

Replace the pieces in <> with your values. The 9999 in the command is the local port that your client applications will connect to, which then gets forwarded out to your SSH server through your domain name. We will get to that in the client application step below.

Once you run PuTTY, it should ask you to log in with the password you created in step 1, and you are good to go. You need to have a tunnel set up for your user in the SSH server. You might also have SFTP and Shell set up, in which case PuTTY will show you a command line – this is the command line on the actual server on your internal network! You should now be connected to your SSH server, but you still aren’t secure, because no applications are set up to use the new proxy yet.
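If you are somewhere a plain OpenSSH client does work (a Linux VM, or Cygwin on XP), the equivalent of that putty batch file is a one-line script. This is a sketch – the user, host, and ports below are just example values standing in for yours:

```shell
# OpenSSH equivalent of the putty batch file (example user/host/ports).
# This writes a small launcher script rather than connecting right away.
cat > tunnel.sh <<'EOF'
#!/bin/sh
# -D 9999 : local SOCKS listener that the client apps will point at
# -N      : tunnel only, no remote shell
# -p 22822: the non-default SSH port from step 1
exec ssh -N -D 9999 -p 22822 steve@myhome.example.org
EOF
chmod +x tunnel.sh
```

The -D flag is what creates the dynamic (SOCKS) forward; everything the browsers send to localhost:9999 goes down the encrypted tunnel.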

Step 4: Configure Client Applications

Now you can configure the applications on your laptop to use your new proxy. The major applications you need to configure are your Internet browsers: Firefox and Internet Explorer.

In Firefox, go to Tools->Options: Advanced tab, Network tab, Settings button. Check the radio button for “Manual Proxy Configuration”. In the SOCKS Host area put localhost (you might need to put 127.0.0.1) and then the local port you passed to -D (9999 in the example above) – not the SSH server port.

In Internet Explorer (7.0), go to Tools->Options: Connections tab, LAN Settings button. Check the box to “Use a proxy server for your LAN…”, click the Advanced button, and in the Socks area put localhost and the same local port (9999).

Wow, tons of steps to just change a little setting! I have been playing with a way to automatically set these up based on your local IP Address but haven’t perfected it yet. Once I do, I will post up an easier way.

Other applications you might have on your machine are email, IM, etc. For email, you might want to use a web mail client at this point. For IM, you can configure most clients to use SOCKS, but when I am at the coffee shop I use a web based IM like Meebo: since your browser session is already secure through the SOCKS settings, your IMs will also be secure. A few other applications you might use, like Windows Live Writer, usually have a place to set up SOCKS settings as well. If an application doesn’t have a place to set up SOCKS, then you probably don’t want to use it.

If you do have a corporate VPN client, you can connect to that as it is secure, and then connect to internal corporate sites, email, etc. Usually corporate networks have tunnels set up when you connect to VPN: all your “corporate” traffic goes down the secure tunnel, while other traffic (such as IM, browsing, etc.) goes down an unsecured tunnel. Now that you have your SSH server set up, you basically have three tunnels when you connect to VPN: secure corporate, secure public, and unsecured public (for the applications you can’t configure SOCKS for).

Step 5: Browse Securely

Now that you have your secure setup, you can browse with more confidence. You still need to be careful, but your traffic is pretty much unreadable by would-be hackers. I tested this by running it on an XP virtual machine while running Wireshark on my Vista box, and all the traffic was unreadable.

Once you get back home, though, you need to reverse all the SOCKS settings in your client applications so you can browse again from your internal network. That is, unless you want to connect to SSH from your internal network, but that is just overkill and bad performance.

As far as connection speeds, some people really complain that it is slow. I haven’t really noticed. It is a bit slower, but I would rather be a little slower and secure than fast and wide open. For casual browsing, reading feeds and news, etc., it is fine.

Other Stuff:

I set all this up using a Vista box for the backend server and a Vista box for the client. In our testing we found that you need to run PuTTY as an administrator for it to work. I actually downloaded Ubuntu Linux 7.04 as a VMware image, loaded up VMware Player, and tested using the built in SSH client, and that worked fine, so I knew my SSH server was working. I also tested using a Windows XP SP2 VPC image with Cygwin as the SSH client, and it worked fine as well. So remember: if you are on Vista, you need PuTTY and you need to run it as an administrator!

Since I have only been running this for around two days, there are still some bugs to be worked out. Every so often you might receive an error from PuTTY about an abnormal packet received, which basically disconnects you. Your client applications are still configured to use the proxy, so if you try to browse you will get an error; you need to shut down PuTTY and reconnect to your SSH server, and then you can browse just fine again.

I have tested this on unsecured networks at local coffee shops, and as I write this blog post I am sitting at Starbucks, connected to a T-Mobile hotspot, securely tunneling through SSH to my server in my apartment, browsing securely – you just need to log in to the hotspot first, then connect to SSH and change your client application settings.

Categories
Geeky/Programming Life Ramblings

IT Disaster Recovery, what the I-35W Bridge Collapse shows us

Now, I am from Minnesota originally. I drove over that bridge 3 days before it collapsed. It sucks; it’s a bad thing for the state, for the people involved, and for everyone who passed away or was injured. It is a very sad situation that no one should have to go through.

What does the bridge collapsing have to do with IT? Well. It is a disaster. And like IT disaster scenarios, it gets the same “Oh my god we need to fix this” after the fact treatment.

MN Gov. Pawlenty announced an immediate emergency round of inspections of all of the state’s bridges, starting with the three that have the same structure as the crumbled Minneapolis span. Other governors are having the bridges in their states inspected. People are running around going crazy about inspecting bridges that 3 days ago they couldn’t have cared less about. What gives?

Really? Let’s do something after the fact. The bottom line is that these kinds of action plans should have been set up beforehand. Just like in IT. Backups are a good example. No one says or does anything or wants to spend any money on backups. Then one day the server crashes and everyone loses their files and email. I will bet money that the next day there is a huge budget and people running around like idiots getting a backup plan in action.

Where were those people beforehand? We know that stuff needs to be backed up. We know that bridges need to be inspected. WTF are we doing? If we know the possible problems, and we know how we can prevent them, then why do we let things slide? Where is the accountability?

The government needs to step up. People who are leaders and decision makers need to step up. And if something does go awry, they need to take responsibility for what happened – whether it is a bridge that fell, a server that crashed, or any other disaster scenario.

Categories
Geeky/Programming

Real World IT: Backing Up Cisco Routers using .NET

Usually, in a company, there is a “development” department and an “IT” department, and usually the departments don’t really work together. Development is focused on delivering business value through coding applications for end users or B2B clients. IT is busy making sure the corporate network is humming along and that other internal issues related to technology in general are taken care of.

In my experience, I like to jump the threshold between the two departments. I started out working Helpdesk (IT dept) and coded in the free time I had, eventually starting/breaking into the development side. But my passion for internal IT functions didn’t slow. Some of the guys I worked with in the IT department always wanted applications to do specific things for OUR network – things you can’t buy, or things where you can buy a generic application for way too much money and it won’t work exactly how you want it to. That is where developers and IT can actually work together and bridge that gap.

This post is about backing up configurations on Cisco routers using .NET (C#). Now, most developers programming away on business applications really don’t care about the routers inside their company. They know they are there, might know somewhat how they work, and as long as they work, its fine – that is what IT is all about. But on the other hand, the Network Administrator really cares about Cisco routers. He dreams about them. Names his kid Cisco, or Switch.

Now, the network admin can log in to all his routers and run some commands to back up his configs. The most common way to do this is to send the config to a TFTP server. If they want a backup once a month and they have one router, well then great, a manual solution is fine – the network is probably not big or complex and the network admin needs something to do. In most cases, though, they will want to back up their routers daily, and they might have multiple routers.

In this scenario, let the network admin set up the TFTP server. Those are abundant and easy to find, easy to setup. What we are concerned with from a development standpoint is actually logging into the router, running commands to backup the config (to the TFTP server) and getting out.

Now, a few things are needed from your network admin. First, you are going to need the IP addresses of all the routers. Next, you want to make sure there is one user on all the routers, with the same password, that you can use just for this backup program. There are multiple ways they can do this, and since I am not a network guru, leave that to them – they will throw out terms like RADIUS, etc., but it should be easy for them. Next, you need them to make sure that all routers are set up the same as far as the way they use “enable” commands, etc.

The first thing you want to do is take that information from your network admin and test each router manually. Telnet in (or SSH if you can get that working) using the IP, log in with the user and password, run the enable command, and look at the strings that come back. Every router has a name like

company-router-123>

where the > is the prompt. You need to jot down this name to go along with the IP address. Now you can get fancy later and have your network admin set that name up in DNS and then you can just have a list of names, but start with IP addresses first.
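Those name/IP pairs you jot down can be kept in a plain text file and parsed at startup. A minimal sketch, assuming a hypothetical one-router-per-line “name,ip” format (the class and file name are mine, not part of the backup program above):

```csharp
using System;
using System.Collections.Generic;

static class RouterInventory
{
    // Parses lines of "deviceName,ipAddress" into a name -> IP dictionary,
    // skipping blank lines and "#" comment lines.
    public static Dictionary<string, string> Parse(IEnumerable<string> lines)
    {
        var routers = new Dictionary<string, string>();
        foreach (string line in lines)
        {
            string trimmed = line.Trim();
            if (trimmed.Length == 0 || trimmed.StartsWith("#"))
                continue;

            string[] parts = trimmed.Split(',');
            routers[parts[0].Trim()] = parts[1].Trim();
        }
        return routers;
    }
}
```

Usage would be something like `RouterInventory.Parse(File.ReadAllLines("routers.txt"))`, then loop the dictionary and back up each router in turn.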

Now comes the developing part. A long time ago, right when .NET hit the airwaves, I created a class library called Winsock.Telnet. I named it Winsock because I was a VB6 developer and used the Winsock control to do telnet within my programs, so it just made sense. I still use this library today. The source code is buried somewhere on a backup DVD or server in my apartment, and finding it would be a wasted effort at this point, but the class library works, so that is what matters. I use it to do my telnet sessions. (To do SSH I have used WeOnlyDo’s .NET SSH client – Chris Super blogs about how to run SSH on your network yet still use telnet for a specific purpose, such as this.) You can get my Winsock library here.

Here are the guts of the main method to back up a config from a Cisco router. The steps are easy: connect, log in, enable, run the TFTP command, send in the TFTP address and a path, then exit. The second half is extra credit. I actually set up an SVN repo on the directory on the server that I TFTP the configs to, do an SVN diff, and if the config changed, email the diff to the network admin. But everything up to the “exit” command would get you by. The Sleep(1) call just waits for a second, which you need to do with telnet so you don’t overrun yourself. I have included the methods that do the SVN diff.


        private static void LogRouterConfigTelnet(string deviceName, string ipAddress, string enablePassword)
        {
            _connectorTelnet = new WinsockTelnet.Winsock(ipAddress, 23, 60);
            _connectorTelnet.Connect();
            _connectorTelnet.WaitAndSend("Username:", _username);
            _connectorTelnet.WaitAndSend("Password:", _password);
            _connectorTelnet.WaitAndSend(deviceName + ">", "enable");
            _connectorTelnet.WaitAndSend("Password:", enablePassword);

            Sleep(1);

            _connectorTelnet.SendAndWait("copy run tftp", "[]?");
            _connectorTelnet.SendAndWait(_tftpAddress, "?");
            _connectorTelnet.SendAndWait("routers/" + deviceName + "/" + _filename, deviceName + "#");
            _connectorTelnet.SendMessage("exit");
            _connectorTelnet.Disconnect();

            // copy over svn copies, delete from root folder
            File.Copy(@"C:\TFTP-Root\routers\" + deviceName + @"\" + _filename, @"C:\tftp-source\routers\" + deviceName + ".txt", true);

            // do svn diff
            string diff = SVNDiff(deviceName + ".txt");

            if (!string.IsNullOrEmpty(diff))
            {
                System.Console.WriteLine(diff);

                // if different, commit to svn, email diffs
                SVNCommit(deviceName + ".txt");

                EmailDiff(deviceName, diff.Replace(Environment.NewLine, "<br>").Replace("\n", "<br>"));
            }


        }

        private static string SVNDiff(string filename)
        {
            ProcessStartInfo psi = new ProcessStartInfo();
            psi.FileName = @"C:\Program Files\Subversion\bin\svn.exe";
            psi.WorkingDirectory = @"C:\tftp-source\Routers";

            psi.Arguments = String.Format("diff {0}", filename);

            psi.UseShellExecute = false;
            psi.RedirectStandardOutput = true;
            psi.CreateNoWindow = true;

            Process p;
            String output;

            p = Process.Start(psi);

            try
            {
                output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();

            }
            finally
            {
                // shouldn't happen, but let's play it safe
                if (!p.HasExited)
                {
                    p.Kill();
                }
            }

            return output.Trim();

        }

        private static void SVNCommit(string filename)
        {
            ProcessStartInfo psi = new ProcessStartInfo();
            psi.FileName = @"C:\Program Files\Subversion\bin\svn.exe";
            psi.WorkingDirectory = @"C:\tftp-source\Routers";

            psi.Arguments = String.Format("commit -m \"config changed\" {0}", filename);

            psi.UseShellExecute = false;
            psi.RedirectStandardOutput = true;
            psi.CreateNoWindow = true;

            Process p;
            String output;

            p = Process.Start(psi);

            try
            {
                output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();

            }
            finally
            {
                // shouldn't happen, but let's play it safe
                if (!p.HasExited)
                {
                    p.Kill();
                }
            }

        }

        static void EmailDiff(string deviceName, string diff)
        {

            // uses the legacy System.Web.Mail API (MailMessage, SmtpMail)
            MailMessage msg = new MailMessage();
            msg.To = "networkadmin@yourcompany.com";

            msg.From = "ciscoconfig@yourcompany.com";
            msg.Subject = "Cisco Config Changed - " + deviceName;
            msg.Body = diff;
            msg.BodyFormat = MailFormat.Html;
            SmtpMail.SmtpServer = "yourmailserver";

            try
            {
                SmtpMail.Send(msg);
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(ex.ToString());
            }

        }

        static void Sleep(int seconds)
        {
            System.Threading.Thread.Sleep(seconds * 1000);
        }


So, you can see, taking a little time to create a small program to do this is not really tough, and your IT department will be happy. It also gives you a reason to use things in .NET that you might not use every day, especially if you are a web programmer, and you will learn a little more about IT things (routers, networks, etc.).

Note: the code isn’t the prettiest, and it really doesn’t need to be. There is some duplication, yes, and some hardcoded paths. If that worries you, release a 2.0 version with all of that in the App.Config and refactor out a couple of methods. Or, if you get really ambitious, create a library called Utils or something with all the common functions you are going to use, like for calling processes.
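As a sketch of that refactor, the duplicated process plumbing in SVNDiff and SVNCommit could collapse into one hypothetical helper (the class and method names are mine, not part of the original program):

```csharp
using System;
using System.Diagnostics;

static class ProcessRunner
{
    // Runs an external command with redirected stdout and returns the
    // trimmed output; kills the process if it somehow hasn't exited.
    public static string Run(string fileName, string arguments, string workingDirectory)
    {
        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = fileName;
        psi.Arguments = arguments;
        if (workingDirectory != null)
            psi.WorkingDirectory = workingDirectory;
        psi.UseShellExecute = false;
        psi.RedirectStandardOutput = true;
        psi.CreateNoWindow = true;

        using (Process p = Process.Start(psi))
        {
            string output;
            try
            {
                output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
            }
            finally
            {
                // shouldn't happen, but let's play it safe
                if (!p.HasExited)
                {
                    p.Kill();
                }
            }
            return output.Trim();
        }
    }
}
```

With that in place, SVNDiff shrinks to a one-liner: `return ProcessRunner.Run(svnPath, "diff " + filename, workDir);`, and SVNCommit likewise.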
