
Fun with Powershell – Network connection reset Tuesday, 04/02/2013

Posted by Percy in Fun with Powershell, Technology.

I realized that I have never posted one of the Powershell commands (or scriptlets) that I use most often. I use a lot of VMs for my everyday development work. Often, the virtualized network connection can get out of whack with my physical network connection. So, I find myself running these three commands over and over:

ipconfig /release
ipconfig /renew
ipconfig /flushdns

It’s somewhat of a “scorched earth” approach to network issues, but it’s short, quick and it seems to work fairly well. I realized that these three commands are the same except for the parameter passed into ipconfig. So, now I just fire up Powershell and fire off this one liner:

@("release", "renew", "flushdns") | % { ipconfig /"$_" }

I’ve turned this into a function in my profile (Reset-Network), but I often forget it’s there and just type the whole thing, since it’s so short.
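For reference, the function is nothing more than the one-liner with a name:

```powershell
# Reset-Network: scorched-earth reset of the network stack, same as the one-liner above.
function Reset-Network {
    "release", "renew", "flushdns" | % { ipconfig /"$_" }
}
```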

Anyways, short and simple, but pretty effective. One more reason I’m enjoying playing with Powershell.


Fun with Powershell, Part 5 – Organizing pictures Saturday, 05/12/2012

Posted by Percy in Fun with Powershell, Technology.

After the birth of my son recently, there were (as you can imagine) a lot of pictures taken.  Luckily, my father-in-law loves to take pictures with his iPhone.  So, in a week or so he’s taken almost 100 photos – which is awesome.  Sarah and I have been so busy just taking care of him, it’s been nice to have an extra pair of hands to take pictures.  The issue was how to get the pictures from his iPhone or iPad (where he has his own iTunes account) onto our machines (where we have our own iTunes account).  So, syncing with iCloud just wasn’t going to work.

I had his iPad, but most of the pictures were from his iPhone.  So, the pictures weren’t physically on the iPad.  I could have saved them from the photo stream, but I didn’t want to add any more files to his iPad than he already had.  I tried a few ways of getting the files off (without paying for an app), and I finally settled on just e-mailing them to myself.  Since gmail has an attachment limit, I could only send 15-20 pictures at a time.  That resulted in almost 10 separate e-mails, which was fine.  Gmail also has a feature that allows you to download all the attachments from an e-mail as a single zip file.  So, now, I have about 10 zip files with 15-20 pictures apiece that I want to organize.  Further, each zip file has a similar set of file names (image.jpeg, image_2.jpeg, image_3.jpeg, etc.).

Now, I have my own way of organizing my pictures.  I have a folder of all my pictures with subfolders named “[Date in yyyy-mm-dd format] [Name of event]”.  Then, each picture within the directory is named the same as the directory but with a number suffix – “[Date in yyyy-mm-dd format] [Name of event] ###.[Extension]”.  You can see that here:
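In case the screenshot doesn’t come through, a made-up example of that layout:

```
2012-05-05 Birthday Party\
	2012-05-05 Birthday Party 001.jpeg
	2012-05-05 Birthday Party 002.jpeg
	2012-05-05 Birthday Party 003.jpeg
```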

So, what I need is something that will do the following:

  1. Look through the list of zip files
  2. For each zip file, unzip the contents into a temporary folder
  3. For each image, find the date that the image was taken via the metadata
  4. If a directory of the format “[date] [name]” doesn’t exist, create it
  5. Copy the current image into the directory of the format “[date] [name] ###.[ext]” making sure to increment the ### correctly

I could do this all manually, but with ~10 zip files and ~100 images, that could take some time.  So what do I do?  I start writing a PowerShell script!  🙂

So, my first step is to throw all the zip files into a directory, and then just do a Get-ChildItem on them to iterate over each one individually.

Get-ChildItem {Path} -Filter *.zip |
% {
}

Ok, now I need to get a temp folder, and then unzip all of the contents of the current zip file into that temp folder. I found this article about how to unzip files using a Powershell script. So, now, my script looks like this:

Get-ChildItem {Path} -Filter *.zip |
% {
	$tempFolder = Join-Path -Path $([System.IO.Path]::GetTempPath()) -ChildPath ([System.IO.Path]::GetFileNameWithoutExtension([System.IO.Path]::GetRandomFileName()))
	if(!(Test-Path $tempFolder))
	{
		New-Item $tempFolder -Type Directory | Out-Null
	}
	
	$shellApp = New-Object -Com shell.application 
	$zipFile = $shellApp.namespace($_.FullName)
	$destination = $shellApp.namespace($tempFolder) 
	$destination.Copyhere($zipFile.items())
}

Ok, now I can loop through each file that I’ve unzipped. I just need some code that will pull the date taken from the image file itself. I found this article, which links to a folder containing a script called “ExifDateTime.ps1” that does what I need. So, adding that to my script, I now have this:

Get-ChildItem {Path} -Filter *.zip |
% {
	$tempFolder = Join-Path -Path $([System.IO.Path]::GetTempPath()) -ChildPath ([System.IO.Path]::GetFileNameWithoutExtension([System.IO.Path]::GetRandomFileName()))
	if(!(Test-Path $tempFolder))
	{
		New-Item $tempFolder -Type Directory | Out-Null
	}
	
	$shellApp = New-Object -Com shell.application 
	$zipFile = $shellApp.namespace($_.FullName)
	$destination = $shellApp.namespace($tempFolder) 
	$destination.Copyhere($zipFile.items())

	Get-ChildItem $tempFolder | 
	% {
		$fileStream = New-Object System.IO.FileStream($_.FullName,
		                                            [System.IO.FileMode]::Open,
		                                            [System.IO.FileAccess]::Read,
		                                            [System.IO.FileShare]::Read,
		                                            1024,     # Buffer size
		                                            [System.IO.FileOptions]::SequentialScan
		                                           )
		$img = [System.Drawing.Imaging.Metafile]::FromStream($fileStream)
		$exifDT = $img.GetPropertyItem('36867') # Date taken
		$exifDtString = [System.Text.Encoding]::ASCII.GetString($ExifDT.Value)
		[datetime]::ParseExact($exifDtString,"yyyy:MM:dd HH:mm:ss`0",$Null)
	}
}

Now, I throw in a little image manipulation via System.Drawing.Bitmap (rotating each image by 90 degrees, since most have been taken on an iPhone and they are vertical rather than horizontal), and then I copy the file to my custom output directory (creating the directory if it doesn’t exist). Finally, I delete the temporary folder. All in all, around 70 lines for doing some pretty hefty file organization is not a bad deal. Here’s the final script:

$root = [Root Directory]
[Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[Reflection.Assembly]::LoadWithPartialName("System.Drawing") # needed for the Metafile/Bitmap types below
Get-ChildItem $root -Filter *.zip |
% {
	$tempFolder = Join-Path -Path $([System.IO.Path]::GetTempPath()) -ChildPath ([System.IO.Path]::GetFileNameWithoutExtension([System.IO.Path]::GetRandomFileName()))
	if(!(Test-Path $tempFolder))
	{
		Write-Host "Creating $tempFolder"
		New-Item $tempFolder -Type Directory | Out-Null
	}
	
	Write-Host "Unzipping $($_.FullName) to $tempFolder"
	$shellApp = New-Object -Com shell.application 
	$zipFile = $shellApp.namespace($_.FullName)
	$destination = $shellApp.namespace($tempFolder) 
	$destination.Copyhere($zipFile.items())
	
	Get-ChildItem $tempFolder | 
	% {
		Write-Host "Getting date taken for $($_.FullName)"
		$fileStream = New-Object System.IO.FileStream($_.FullName,
		                                            [System.IO.FileMode]::Open,
		                                            [System.IO.FileAccess]::Read,
		                                            [System.IO.FileShare]::Read,
		                                            1024,     # Buffer size
		                                            [System.IO.FileOptions]::SequentialScan
		                                           )
		$img = [System.Drawing.Imaging.Metafile]::FromStream($fileStream)
		try
		{
			$exifDT = $img.GetPropertyItem('36867') # Date taken
			$exifDtString = [System.Text.Encoding]::ASCII.GetString($exifDT.Value)
			$dateTaken = [datetime]::ParseExact($exifDtString,"yyyy:MM:dd HH:mm:ss`0",$Null)
		}
		catch {}
		Write-Host "Date taken - $dateTaken"
		$prefix = [string]::Format("{0:yyyy-MM-dd} [Custom Format]", $dateTaken)
		$destinationPath = "$($root)\Output\$prefix"
		
		if(!(Test-Path $destinationPath))
		{
			Write-Host "Creating $destinationPath"
			New-Item $destinationPath -Type Directory | Out-Null
		}
		
		$fileWritten = $false
		$index = 1
		while(!$fileWritten)
		{
			$destinationFile = [string]::Format("{0}\{1} {2:000}{3}", $destinationPath, $prefix, $index, $_.Extension)
			if(!(Test-Path $destinationFile))
			{
				Write-Host "Copying to $destinationFile"
				$i = New-Object System.Drawing.Bitmap($_.FullName)
				$i.RotateFlip([System.Drawing.RotateFlipType]::Rotate90FlipNone)
				$i.Save($destinationFile)
				
				$fileWritten = $true
			}
			$index++
		}
		
		$fileStream.Close()
	}
	
	Write-Host "Removing $tempFolder"
	Remove-Item $tempFolder -Force -Recurse
}

Enjoy!

Fun with Powershell, Part 4 – Enumerations as function parameters Friday, 05/11/2012

Posted by Percy in Fun with Powershell, Technology.

So, a lot of you guys who are already using Powershell a decent amount might already know this, but it came as a surprise to me – one of those “wow, I can’t believe it actually works in this awesome way” surprises.

So, I was defining a custom function where one of the parameters was going to be an enumeration from a compiled assembly – in this case I was adding columns (fields) to a Sharepoint list in code, and my function parameter was the field type, which is of type Microsoft.SharePoint.SPFieldType. When I then called the method, I thought I had to use the fully qualified syntax to send in the appropriate value (-ParamName [Microsoft.SharePoint.SPFieldType]::Integer). However, when I ran my code, I got an error about that property not being of the right type. So, I deleted the value, and started typing. What do you know, I got intellisense for the individual values. Apparently, Powershell is smart enough to interpret the parameter as an enum, and only make you specify the particular value. My parameter call then became -ParamName Integer.
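To see it in action without Sharepoint, here’s a minimal sketch using System.DayOfWeek in place of SPFieldType (Test-IsWeekend is just a throwaway function for illustration):

```powershell
# Hypothetical function taking an enum-typed parameter.
function Test-IsWeekend {
    param([System.DayOfWeek] $Day)
    $Day -eq [System.DayOfWeek]::Saturday -or $Day -eq [System.DayOfWeek]::Sunday
}

# The fully qualified form works:
Test-IsWeekend -Day ([System.DayOfWeek]::Saturday)   # True
# But so does the bare value name - Powershell coerces the string to the enum:
Test-IsWeekend -Day Saturday                         # True
```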

Like I said, that may not be news for anyone else, but it rocked my world a bit.

Fun with Powershell, Part 3 – Searching a codebase Wednesday, 03/28/2012

Posted by Percy in Fun with Powershell, Programming, Technology.

One of the neat things about having Powershell in your toolkit is that it becomes an option when you need to solve a problem. This is a fairly simple code example, but I think it’s one that a lot of developers could use.

So, here’s the problem – we’ve got a fairly massive codebase. Some of it is “core” and some of it is client-specific. The “core” (we call it product) is fairly complicated and has about a dozen “packages”, each of which contains anywhere from about 5 to 30 assemblies/projects. So, when we’re trying to trace a code problem through its different levels, we may be opening a handful of solutions. That either requires multiple instances of Visual Studio to be open at the same time, or having only one open and losing track of where you are in the stack.

Most of the time, you just want to figure out where a certain method is defined or where a certain class is used, and poring through the different code files/projects/solutions can become tedious. Too, it can be hard to locate a class given its name and namespace. For example, we have a “Components” package, but the class CIS.Components.Maintenance is actually in the “Services” package – for various reasons.

So, what can you do to find what you’re looking for easily? If you have all of the code downloaded on your machine, it’s as simple as a string search in a list of files – some people know it as “grep”. Powershell has a cmdlet that functions just like grep – Select-String. Combine that with a “Get-ChildItem -Recurse” command, and you can find anything in any file. Now, if I sprinkle in a little grouping and selecting so that we just get back a list of files containing the search string, I get this:

Get-ChildItem {Code Root Directory} -Recurse -Include *.cs | Select-String "{Method or Class Name}" | Group-Object Path | Select-Object Name

Voilà! I can quickly find where things are located, then just peek in that file rather than poring through projects and solutions.
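If you use this a lot, it drops straight into a profile function – something like this (Find-InCode is just a name I made up):

```powershell
# Hypothetical wrapper around the one-liner above.
function Find-InCode([string] $Pattern, [string] $Root = ".")
{
    Get-ChildItem $Root -Recurse -Include *.cs |
        Select-String $Pattern |
        Group-Object Path |
        Select-Object Name
}
```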

Enjoy!

Fun with Powershell, Part 2 – Investigating code Issues Tuesday, 02/21/2012

Posted by Percy in Fun with Powershell, Programming, Technology.

Since I missed posting something last week, I’m putting out two this week to keep up with my resolution.  This is the second post in a series of posts about the fun I’m having with Powershell.

Part of me putting this series of posts out there is to help people look at Powershell in a different light – as a developer tool. I think it gets labeled as a SysAdmin tool, which it definitely is. However, I’m finding more and more that when I turn to Powershell to help me, I end up getting the info I need quicker and easier and I write less code. This is another one of those issues.

So, I picked up an unfinished project for a client recently. The project is in the process of being QAed, but there were issues that had been found along the way. The original developer had since left the company, so it was up to me to figure it all out. One of these issues had to do with the rendering of a DataDynamics Active Report. The error that was being thrown had to do with displaying a logo. Basically, it was instantiating a Bitmap object using a stream from an embedded resource, but the stream was returning as null.

new Bitmap(Assembly.GetExecutingAssembly().GetManifestResourceStream("{SupposedlyValidReference}")); 

So, I started thinking about how I could look into what was causing this issue. I could set up the web application where this issue was being thrown and go straight to debugging. However, in this case, that was a bit more complicated than it sounded. Also, the issue was in a compiled assembly that wasn’t specifically part of the website. I could write a harness to call this report and debug that way. I actually already have something like this written and shared with some co-workers, but even that was a bit more complicated than I wanted. Too, I didn’t need to test the entire report, just this one line. I could write a console app, and pull this code out to see what’s going on. Yeah, I could do that somewhat quickly, but not as quickly as I could write some Powershell scripts to see what’s going on. So, that’s what I did.

Here’s the script I whipped up somewhat quickly:

cls 
$assemblyPath = "{PathToAssembly}" 
$assembly = [System.Reflection.Assembly]::LoadFile($assemblyPath) 
$stream = $assembly.GetManifestResourceStream("{ValidPath}") 
$stream | Get-Member 

That’s basically the part of the code that was throwing the error. Running that script threw the following error: “Get-Member : No object has been specified to the get-member cmdlet.” That tells me the same thing the error message from the app tells me – my stream object is null. So, I started looking at the path to the resource, and the actual file name. It turns out the file was named ####_Logo.gif, while the resource referenced ####_logo.gif. So, just to see if that was the issue, I changed my script to have the capital L in the resource path. Running that script gave me a valid System.IO.UnmanagedMemoryStream object for $stream.

That’s it. It was just a letter casing issue. However, instead of firing up a new instance of Visual Studio, creating a new console app, adding a reference to this assembly, writing the code and stepping through it, all I had to do was write a simple 5 line script. Really, it could have been 2 lines (Load the assembly, and get the stream).
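For the curious, the two-line version would look something like this (same placeholders as above):

```powershell
$assembly = [System.Reflection.Assembly]::LoadFile("{PathToAssembly}")
$assembly.GetManifestResourceStream("{ValidPath}") # null means the resource name doesn't match
```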

If you missed out on Part 1,  here it is.

Fun with Powershell, Part 1 – Intro/Twitter API Monday, 01/09/2012

Posted by Percy in Fun with Powershell, Programming, Technology.

I’ve really been enjoying learning more about Powershell.  I’ve actually gotten to the point that I think about using it first, and then writing a full-fledged application second.  In the past, I’ve listed some of the things I’ve been able to do with Powershell, but I thought I might dig a little deeper and provide some more detailed info about the what and how of those kind of things.  Too, whenever I talk to developers or technology folks about Powershell, they all have the same reaction – “Isn’t that more for system administrators than for developers?”.  While I fervently agree that anyone administering any kind of system needs to know Powershell for their own sanity, it’s a great tool for developers as well.  So, I thought I would highlight how developers can use Powershell by throwing out some real world examples of how I’ve used it.  Also, I’m assuming that you know the basics of Powershell – declaring variables ($), calling methods on objects, piping input into commands, creating new objects, etc.  If you need more clarification, please let me know.

I don’t know how long this series will be, but here we go with part 1.

I was reading through my twitter feed the other day, and I came across this post from The Oatmeal:

“Is there a way to sort my Twitter followers by the number of followers they have? (in descending order)”

My first thought was “I’m sure I can do that in Powershell”.  Now, there are a number of web apps out there that will actually do this for you.  I even think the next tweet points to friendorfollow.com.  However, I thought it would be a neat learning exercise.  Before we begin, I figured I’d throw this out there – I honestly have no idea how this will work.  I don’t have some stock twitter API in my back pocket ready to use.  So, this will be “from scratch”.  So, here’s how I did it.

First off, I’m assuming that there is a twitter API, so I go to twitter’s website, and I click the link at the bottom labeled “Developers” (hey, that’s me!).  That takes me to dev.twitter.com, which has a link for Getting Started with the API.  Bingo!  After looking around there, I find that it’s just HTTP requests of the format “https://api.twitter.com/{Version}/{Controller}/{Action}.{Format}?{Parameters}”.  So, in this case, the first thing to do is get the list of users for a particular account.  The URL I’m going to use is https://api.twitter.com/1/friends/ids.xml?screen_name={your user name}, since I want to use the XML format.  That returns the list of IDs for everyone I’m following.  So, now I need to get this into Powershell by firing up my trusted Posh IDE – PowerGUI.

With Powershell, you have access to any .NET library, including the ones in the core.  There’s a class called System.Net.WebClient that I think I can use.  Let me see what options I have:

New-Object System.Net.WebClient | Get-Member 

There’s a method called DownloadString, that takes in a string address as a parameter.  So, lets give this a try and see what happens:

$url = "https://api.twitter.com/1/friends/ids.xml?screen_name={your user name}" 
$wc = New-Object System.Net.WebClient 
$wc.DownloadString($url) 

That returns the XML response that I’m looking for.  So, now I need to put that string into an xml object, and see what options I have.

[xml] $data = $wc.DownloadString($url)
$data | Get-Member 

Now, one of the properties is called “id_list”.  If I look at the XML returned from the URL I passed in, that’s the root node.   So, by putting it in the XML object, Powershell has effectively serialized my XML into an object tree.  So, now I should be able to get each individual id.

$data.id_list.ids.id | 
% { 
	$_ 
} 

From that I can see the list of ids. Alright, now I can loop through each one of those users. So, the next step is getting the number of followers for each of them. Luckily, there is another API call that can give us that information.  So, now, for each one of these users I want to call a URL of the following format – https://api.twitter.com/1/users/lookup.xml?user_id={User ID}&include_entities=true.

Note: The twitter API only allows 150 calls an hour per IP address.  I found this one out the hard way.  So, while you CAN loop through everyone, I wouldn’t recommend it if you are following more than 150 people.

So, now I can do something like this within the loop:

$userUrl = "https://api.twitter.com/1/users/lookup.xml?user_id=$($_)&include_entities=true"
[xml] $userData = $wc.DownloadString($userUrl)

A little more testing tells me that the user name of the current follower can be found at $userData.users.user.screen_name and the follower count can be found at $userData.users.user.followers_count. Now, I want to see this data as a simple data set so I can sort the data. There’s a neat little trick I picked up while reading this article. You can dynamically declare an object structure by using a command similar to this:

$newObj = "" | Select-Object Property1, Property2, Property3

If you then do something like this:

$newObj | Get-Member

You’ll see “NoteProperty” types that correspond to the properties you declared earlier. Too, instead of doing multiple set operations on separate lines, you can do multiple set operations on one line – as long as you get the order correct. So, once I get the data for the user, I can do something like this:

$outputItem = "" | Select-Object Name, ScreenName, FollowersCount
$outputItem.Name, $outputItem.ScreenName, $outputItem.FollowersCount = $userData.users.user.name, $userData.users.user.screen_name, [int] $userData.users.user.followers_count

Now, if I just put $outputItem on a line all by itself, it’ll be returned as a result of this iteration of the loop. So, now my loop returns a data set which contains all the data I want to see. Just to make it a bit nicer, I can pipe it out to Out-GridView. That allows me to see the data and play with it all I want. So, if I put it all together, I’ve got a script that hits the Twitter API and will return a grid view of all of the people that you are following and their follower count in a little over 10 lines of code (and some of that simply for formatting):

$url = "https://api.twitter.com/1/friends/ids.xml?screen_name={your twitter name}"
$wc = New-Object System.Net.WebClient
[xml] $data = $wc.DownloadString($url)
$data.id_list.ids.id |
% {
    $userUrl = "https://api.twitter.com/1/users/lookup.xml?user_id=$($_)&include_entities=true"
    [xml] $userData = $wc.DownloadString($userUrl)
    $outputItem = "" | Select-Object Name, ScreenName, FollowersCount
    $outputItem.Name, $outputItem.ScreenName, $outputItem.FollowersCount = $userData.users.user.name, $userData.users.user.screen_name, [int] $userData.users.user.followers_count
    $outputItem    
} | Out-GridView

So, there it is. In the interest of full disclosure, here is my script, with the user limiter code included as well as some commented code I was using for testing purposes:

cls
# New-Object System.Net.WebClient | Get-Member
$url = "https://api.twitter.com/1/friends/ids.xml?screen_name=katman26"
$wc = New-Object System.Net.WebClient
[xml] $data = $wc.DownloadString($url)
$count = 0
$data.id_list.ids.id | 
% {
    if($count -lt 5)
    {
        $userUrl = "https://api.twitter.com/1/users/lookup.xml?user_id=$($_)&include_entities=true"
        [xml] $userData = $wc.DownloadString($userUrl)
        
        $outputItem = "" | Select-Object Name, ScreenName, FollowersCount
        $outputItem.Name, $outputItem.ScreenName, $outputItem.FollowersCount = $userData.users.user.name, $userData.users.user.screen_name, [int] $userData.users.user.followers_count
        $outputItem
    }
    $count++
} | Out-GridView

Let me know if you have any questions about this or any of the examples I’ve shown. I’m not a Powershell expert…yet. 🙂

Lovin’ me some POSH… Thursday, 09/01/2011

Posted by Percy in Fun with Powershell, Technology.

POwerSHell, that is.  Apparently the PS acronym is overused, so the folks at MS decided that PowerShell should be called POSH.

I started going through a PowerShell primer a few months ago, and while it’s a good start, you don’t really learn something until you start applying it.  At least, that’s usually how it works for me.  At the time, I didn’t really understand it, and so I didn’t understand how to apply it.

Then, about six weeks ago, I started working on a task for my current project.  Basically, we were trying to figure out an easy way to set up our enterprise application, which is fairly complicated.  I started by setting up the web piece locally, but quickly realized that I didn’t have all the necessary dependencies.  I was in a spiral of writing a console app to go through a directory of assemblies, load up each assembly to find all of its references, and make sure they all were downloaded.  Thank you, cyclic dependencies in the .NET framework (System references System.Configuration, which references System, etc.).

After a few google searches, I came across an article describing a PowerShell script that could load up assemblies and query their references.  The article was describing a different problem than the one I was trying to solve, but I decided to see if I could tweak it to help us out.  The more I started playing with it, the more I realized the potential.
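The core idea there is surprisingly small – something like this sketch (the directory is a placeholder):

```powershell
# Load each assembly in a directory and list the assemblies it references.
Get-ChildItem {AssemblyDirectory} -Include *.dll, *.exe -Recurse |
% {
	$asm = [System.Reflection.Assembly]::LoadFile($_.FullName)
	$asm.GetReferencedAssemblies() | Select-Object Name, Version
}
```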

Long story short, I’ve been using PowerShell almost exclusively over the last six weeks, and I’m loving it.  So far, I’ve written scripts (including but not limited to)

  • …that will parse a directory of assemblies (.dlls and .exes) and, given a list of directories, will search and download any needed dependencies until all the dependencies are downloaded from the list of directories into the original directory (it’ll also search the GAC just to be sure) – it also validates that you are downloading a specific version
  • …that will download a core website, and the client customizations on top of it, download all the dependencies (see first script), create all the necessary virtual directories (configurable), create an app pool with custom authentication that can be passed in, links the website to the app pool, and updates your host file with the site name
  • …that will download all configuration files for both applications and websites and will drop them in the appropriate folder for our application
  • …that will look into all configuration files that are downloaded and make sure the data in them is accurate (database server, file locations, etc) for their purpose and environment
  • …that will change all configuration files for an entire environment of clients to be used in a local setting (changing any database reference to “localhost”, changing all file paths to a local version, etc.)
  • …that will query my google spreadsheets, download all the information, parse it, and return it to me in a format I can use for personal financial reporting
  • …that uses the producteev API, and will add tasks to a given dashboard based on whatever criteria I provide
  • …that will poll all of the websites on a machine and return all of the virtual directories in a common format
  • …that will parse the output from the virtual directory “audit” (above script) and will return data in a format that helps me determine what virtual directories are common and which ones are individual
  • …that, given a URL, will hit the URL at a frequent interval, and scrape the page to show updated information
  • …that, given a list of our clients (around 26), will get the latest successful build location from TFS, and download it to a common location for deployment, then checks all of the downloaded assemblies to make sure the version of the references are updated
  • …and a bunch of little scripts and commands here and there to help me gather information

I’m completely digging PowerShell.  So far, I haven’t seen a reason to write another console app – though it’s much more flexible and powerful than most of the console apps I’ve written.  You have access to full .NET objects, so the possibilities are really limitless.  It’s not just for IT Administrators.  It’s really for Devs and DBAs as well.  It’s not just a scripting language.  It’s definitely so much more.

Are you using PowerShell?  If so, how?  If not, why?  🙂
