Thursday, April 14, 2016

Yammer Macro for Confluence

Yammer is a wonderful enterprise social tool and Confluence is a wonderful enterprise Wiki.  For a while I've searched for a native Yammer Confluence integration or a user macro for embedding Yammer into Confluence.  There's been some discussion on the Confluence boards, but none of the suggestions appear to work properly unless you enable the Confluence HTML macro.

Yammer's embed code works fine with the Confluence HTML macro, but it doesn't work in the cloud version and it's a security risk to give every user on your server permission to insert HTML/JavaScript code into your pages. For example, it opens you up to cross-site scripting attacks. So instead, I wanted to write a user macro that would allow users to embed their own Yammer newsfeeds on demand.

The issue is related to the way that Confluence handles scripting in user macros. Confluence assumes that your macros are written in XHTML 1.x, so the native Yammer embed code doesn't work: Confluence wraps your JavaScript in <![CDATA[ ]]> tags, which breaks Yammer's script. This is a legacy of the browser wars, when every vendor had its own standard and you had to hide incompatible code from older browsers.

So, without further ado, here is the macro:


  1. Macro Name: yammer-group-newsfeed
  2. Visibility: Visible to all users in Macro Browser (optional)
  3. Macro Title: Yammer Group Newsfeed
  4. Categories: External Content
  5. Icon URL:
  6. Documentation:
  7. Macro Body Processing: Rendered
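The macro body itself (the part Confluence renders) can be sketched along these lines. Treat this as an illustrative template rather than the exact macro from this post: the parameter name, the container id, the assumption that your Yammer network is keyed by your email domain, and the platform_embed.js URL are all mine and should be checked against Yammer's embed documentation.

```html
## @param GroupFeedId:title=Group Feed ID|type=string|required=true
<div id="embedded-feed" style="height: 400px; width: 500px;"></div>
<script type="text/javascript" src="https://c64.assets-yammer.com/assets/platform_embed.js"></script>
<script type="text/javascript">
  // Render the group feed into the div above; feedId comes from the macro parameter.
  yam.connect.embedFeed({
    container: "#embedded-feed",
    network: "yourcompany.com",
    feedType: "group",
    feedId: "$paramGroupFeedId"
  });
</script>
```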

Monday, January 18, 2016

Powershell Script to Enable Run As User in Windows 10

Windows 10 doesn't include a Run as Different User feature in the Start Menu by default.  This is a pain for those of us who need to elevate privileges within our corporate environments.

Fortunately, the fix is pretty simple: just add a registry key that enables the feature.  Unfortunately, I find myself having to do this on every computer I work from.  To make it easy for myself, I created a simple PowerShell script that I can look up here. So, without further ado, here's the script.
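For reference, the script boils down to setting a single policy value. This is a hedged sketch: the ShowRunAsDifferentUserInStart value is what the Group Policy "Show 'Run as different user' command on Start" writes, and the script assumes you run it from an elevated prompt.

```powershell
# Enable "Run as different user" on the Windows 10 Start Menu.
# Assumes the ShowRunAsDifferentUserInStart policy value; run elevated.
$path = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Explorer'
if (-not (Test-Path $path)) {
    New-Item -Path $path -Force | Out-Null
}
New-ItemProperty -Path $path -Name 'ShowRunAsDifferentUserInStart' `
    -PropertyType DWord -Value 1 -Force | Out-Null
```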

Tuesday, November 25, 2014

Compile a bash shell script with bashapp

Every now and then you want to obfuscate a shell script so that the user can't see its contents; for example, a script that checks for security compliance.

Enter bashapp for Mac OS X!

Bashapp will turn your bash script into an Apple .app file. To use it, just follow these easy steps.

First you'll need to download the source code.

Download bashapp source code from GitHub

Next, uncompress the source code into a directory of your choosing.

Next, compile the source code into an executable:

In order to execute bashapp, you'll need to move it into your user bin directory:

To use bashapp, just execute it this way:
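The commands themselves didn't survive in this copy of the post, so here is a hedged sketch of the typical flow; the archive name, compiler invocation, and bashapp's argument order are assumptions to verify against the project's README:

```
# 1. Download and uncompress the source code
unzip bashapp-master.zip
cd bashapp-master

# 2. Compile the source into an executable (assumed single C source file)
gcc -o bashapp bashapp.c

# 3. Move it into your user bin directory
mkdir -p ~/bin
cp bashapp ~/bin/

# 4. Turn a script into an app (assumed usage: bashapp <script> <AppName>)
~/bin/bashapp myscript.sh MyScript
```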

Bashapp is located online at:

Wednesday, November 19, 2014

Cannot Import-Module ActiveDirectory in Exchange 2010 Management Shell

I recently encountered this problem while writing some posh code for cleaning up terminated employees. I am running Windows 8.1 and have the Exchange 2010 SP3 Management Tools installed. When I tried to import the Active Directory module, I received the following error:

module cannot be imported because its manifest contains one or more members that are not valid

As it turns out, the issue is that Exchange 2010 SP3 does not support PowerShell V3 or V4, or Windows Management Framework 3.0+.

The solution is pretty simple: just download Update Rollup 7 or higher for the Exchange 2010 SP3 Management Tools.

But that's not all folks!

The rollup resolves the issue with Exchange Management Shell support for Powershell V4 but it does not update the shortcut that launches the management shell.  You'll need to edit the shortcut to point it to V4:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -version 4 -noexit -command ". 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto"

Also note: every time you install a rollup, it replaces the management shell shortcut, so you may find yourself having to do this again.

Monday, December 5, 2011

PowerShell Script to Get MAC Address from ARP Cache

I was inspired by Shay in his blog post about obtaining a MAC Address.

I wanted something a little more purposeful so I extended it a bit.  Enjoy.
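The extended script itself wasn't preserved here, but the core idea can be sketched like this (the function name and parsing are my own, and arp -a output columns vary by locale):

```powershell
# Look up the MAC address in the local ARP cache for a given IP.
# Parsing is locale-dependent; adjust the column index if needed.
function Get-MacFromArp {
    param([string]$IPAddress)
    arp -a |
        Select-String -Pattern ([regex]::Escape($IPAddress) + '\s') |
        ForEach-Object { ($_.Line.Trim() -split '\s+')[1] }
}

Get-MacFromArp -IPAddress '192.168.1.1'
```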

Thursday, September 22, 2011

Event Log Error checkODBCConnectError: sqlstate = 08001; native error = 11

I recently encountered this error on a customer server and hope to save other folks the effort of resolving it. If you just want the answer, skip to the bottom of the page. In my case, I had a two-node, two-instance SQL Server cluster with a failed instance. When trying to bring the instance online, the service control manager would hang until the timeout period and then litter the event log with the following message:

MSSQLSERVER Error (3) 17052 N/A SQLDEV1 [sqsrvres] checkODBCConnectError: sqlstate = 08001; native error = 11; message = [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied.

The SQL Server error log indicates that SQL Server started and was subsequently shut down by the service control manager. If you search for this issue online you'll likely find this knowledge base article:  It suggests that something might be wrong with the name of your SQL Server cluster resource. That wasn't the case for me at all; however, since I was not familiar with the environment, I couldn't be sure. To get to the bottom of the issue, more research was necessary.

The issue is that the cluster service initiates a connection to SQL Server after the service control manager brings the service online. If it's not able to verify that SQL Server is online, the cluster manager asks the service control manager to stop the service.

So this raises the question: if the SQL Server error log indicates that all is well, why would the cluster think otherwise? Answer: it can't connect. Specifically, if the service is a named instance, the cluster manager needs to know what port to connect to.

So why didn't it know the port number?  Because the SQL Browser service was not running.  Simple solution to what looked like a complicated problem. 

Thursday, March 31, 2011

jUpgrade for Joomla does not like Google Chrome!

I ran across this problem while upgrading Joomla 1.5 to 1.6. I spent hours trying to figure out why jUpgrade wouldn't work, enabling and disabling every plugin to find what was conflicting with it. It turns out that jUpgrade simply doesn't work with Google Chrome. I switched to Firefox and voila, no problems. I thought I'd put this note out for Google to cache and hopefully save someone else the trouble I experienced.

Friday, February 25, 2011

Running a Powershell script in Vista/Windows 7 without prompting for elevation or security

Some time ago I needed to share a very simple PowerShell script with a user for a one-time execution.  Before I forget it I wanted to write it down.

At the time I didn't have the capability to sign the script but I didn't want to complicate things for the user by asking them to run an elevated prompt and re-configure PowerShell's execution policy for remote-signed code.  I looked around for a workaround and this is what I found.

First, the guys at Wintellect developed a simple workaround for elevation in the command shell for Vista or Windows 7.  Get a copy here:

Next, powershell has a command line option to bypass the execution policy.

Combining the two together in a batch file delivers a simple way to accomplish my objective:

elevate powershell.exe -nologo -noprofile -windowstyle hidden -command "& {set-executionpolicy Bypass -scope Process; .\myscript.ps1}"

Tuesday, January 25, 2011

T-SQL Viewing current processes and queries in SQL 2000

One of my customers recently asked me if there was a way to include the current query with the results of master.dbo.sysprocesses in SQL 2000. Yes, there are folks out there still running production applications on SQL 2000! After I finished cleaning the cobwebs out of my head, I came up with this. It's probably not the most elegant solution, but it gets the job done, and it made me appreciate the DMVs a little more. By the way, this uses a function (fn_get_sql) that was introduced in SQL 2000 SP3, so you'll need that before running this. Also, to improve the matches, you can enable trace flag 2861 (also introduced in SQL 2000 SP3), which allows SQL Server to keep zero-cost plans in cache that it would typically not cache. The downside is a reduced data cache, but the upside might be worthwhile; either way, it's a measurable change if you implement it. The code I wrote is below, royalty free.
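The code itself didn't survive in this copy, but the core technique looks something like the following sketch. On SQL 2000 there is no CROSS APPLY, so you fetch the handle first and then pass it to ::fn_get_sql (spid 52 is just a placeholder):

```sql
-- Grab the sql_handle for one session, then look up its current query.
DECLARE @handle binary(20)
SELECT @handle = sql_handle
FROM master.dbo.sysprocesses
WHERE spid = 52

SELECT [text]
FROM ::fn_get_sql(@handle)
```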

Wednesday, January 5, 2011

SQL Server Database Log File Virtual Log File Sizing Considerations

SQL Server's database log growth algorithm automatically creates virtual log files (VLFs) with every log growth. The algorithm tries its best to carve the virtual log files to the appropriate size. However, left unattended, log file growth can create an unnecessarily large number of virtual log files, which, as Linchi Shea has pointed out, can cause performance problems. Out of curiosity, I put together a couple of tables to see how SQL Server's log would grow if you set it to a fixed size with an equivalent auto-growth size. So, if you haven't set your database log's minimum size and auto-growth size, you might want to check how many VLFs you have out there by running DBCC LOGINFO in the database in question; the number of rows returned is how many VLFs your database has. Keep in mind that the default auto-growth amount is 10%, which makes the growth compound and therefore variable in size at each growth interval. NOTE: Before you run out and make your databases use very large auto-growth sizes, consider this: Kimberly Tripp suggests 8 GB growth intervals for log files, but in my testing I've seen some (slow) servers take several minutes to zero-initialize an 8 GB log file, while another server could perform the same operation in 20 seconds, so use with caution. Finally, be aware that there's a bug out there for growing in increments of 4 GB.

Growth (MB)   VLFs/Growth   VLF Size (MB)   In 5 GB   In 10 GB   In 20 GB   In 30 GB
          2             4               1    10,240     20,480     40,960     61,440
          4             4               1     5,120     10,240     20,480     30,720
          8             4               2     2,560      5,120     10,240     15,360
         16             4               4     1,280      2,560      5,120      7,680
         32             4               8       640      1,280      2,560      3,840
         64             4              16       320        640      1,280      1,920
        128             8              16       320        640      1,280      1,920
        256             8              32       160        320        640        960
        512             8              64        80        160        320        480
        768             8              96        53        107        213        320
      1,024             8             128        40         80        160        240
      1,536            16              96        53        107        213        320
      2,048            16             128        40         80        160        240
      3,072            16             192        27         53        107        160
      4,096            16             256        20         40         80        120
      5,120            16             320        16         32         64         96
      6,144            16             384       N/A         27         53         80
      7,168            16             448       N/A         23         46         69
      8,192            16             512       N/A         20         40         60
      9,216            16             576       N/A         18         36         53
     10,240            16             640       N/A         16         32         48

To set your database log file's size and auto-growth increment, use this query as an example:

ALTER DATABASE [mydb] MODIFY FILE ( NAME = N'mydblog', SIZE = 4096MB, FILEGROWTH = 4096MB )

Keep in mind that tempdb gets re-created during start-up, so in tempdb this change is temporary until the next reboot. If you run something like DBCC SHRINKFILE( templog, TRUNCATEONLY ) you'll revert to whatever you had before you ran ALTER DATABASE.

Friday, December 3, 2010

New DMF for SQL Server 2008 sys.dm_fts_parser to parse a string

Today's newsletter has a nice article on sys.dm_fts_parser.  I didn't know it existed.  I need to get my head out of the shell more often!

New DMF for SQL Server 2008 sys.dm_fts_parser to parse a string
Many times we want to split a string into an array and get a list of each word separately. The sys.dm_fts_parser function will help us in these cases. Moreover, this function will also differentiate noise words and exact-match words. sys.dm_fts_parser can also be very powerful for debugging purposes. It can help you check how the word breaker and stemmer work for a given input for Full Text Search [more]

Friday, November 19, 2010

Installing SQL 2008 on Windows 2008 R2

More and more I find myself needing to install SQL 2008 on Windows 2008 R2.   The problem is that there is a compatibility issue with Server 2008 R2 that requires jumping through hoops to install SQL Server 2008.  For me, the easiest solution is to build new patched installation media.  Just follow the instructions starting at
Procedure 2: Create a merged drop.

Article ID: 955392 - Last Review: October 30, 2009 - Revision: 5.4
How to update or slipstream an installation of SQL Server 2008

Monday, November 15, 2010

Removing the SQL Server Management Data Warehouse (or not!)

I have SQL Server Management Data Warehouses implemented on dozens of servers in various customer sites.  The data it collects is great, especially considering that it's free and easy to implement (initially).  But as with most free things, it has some drawbacks.  Most of all, sometimes it's not very reliable.  In some environments, I spent more time chasing down issues with the SSIS packages it runs than reviewing the data it collects.  Some of my clients have chosen to invest in commercial monitoring tools such as Quest Spotlight, Idera Diagnostic Manager, SQL Sentry, Redgate SQL Monitor or Confio Ignite.  Each of these tools has pluses and minuses, but overall they are all more reliable than the Management Data Warehouse. 

The net result is that I'm working to decommission Management Data Warehouses in various environments. Unfortunately, I discovered that Microsoft never implemented a removal utility for the Management Data Warehouse. Without manual intervention, the best you can do is disable it. However, it leaves its jobs behind, which in some cases had a last status of Failed. This clutters up my reporting and drives me batty.

Initially I cleared the job failures so that they wouldn't show up in any more reports.  Now I'm looking at how to remove the jobs and objects completely.   Sadly, Microsoft still hasn't published information about this.  DBAs are complaining, but apparently it isn't a big priority.

So, that leaves the manual steps of removing the fragments of the Management Data Warehouse from the server.  The removal workaround documented on the support site isn't supported, nor was it recommended by Microsoft.  Moreover, it references a table that doesn't exist.

I've decided to live with it and wait to see what Microsoft puts out in a future service pack before attempting a crude hack that might make it impossible to fully remove the feature when Microsoft finally gets around to this issue.

Friday, October 1, 2010

Recovering a SQL Server database from a deleted log file

One of my customers called me today curious about what to do if you delete the LDF log file for a database. To be honest, I hadn't experienced that one in the wild, so I had to look it up. Turns out the recovery process is pretty straightforward. The server edition was SQL Server 2008 SP1.

Warning: this procedure is not the ideal way to resolve this problem (ref), but in my situation I had no other choice. This customer's environment was a test environment and backups were disabled for this database. Here's the rundown on what happened: the log file ran out of space, so he shut down SQL Server (committing all pending transactions) and deleted the LDF file. Since I was sure that the database file was in a good state, all I needed was to create a new log file. This is what I did; in your case, replace DBNAME with your database name:
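The exact statements weren't preserved in this copy of the post. One common way to rebuild a missing log on SQL 2008, consistent with the messages below, is to re-attach the data file and let SQL Server rebuild the log (the file path here is a placeholder):

```sql
-- Re-attach the data file without its log; SQL Server rebuilds a new LDF.
CREATE DATABASE [DBNAME]
    ON ( FILENAME = N'D:\Data\DBNAME.mdf' )
FOR ATTACH_REBUILD_LOG;
```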


I got this in return, which is expected in this case:

File activation failure. The physical file name "S:\Log\DBNAME.ldf" may be incorrect.
The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.
Warning: The log for database 'Encompass' has been rebuilt. Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context on the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.

Then I ran DBCC CHECKDB again to be sure that the database was consistent.

Afterward, all that was necessary was to bring the database online:
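The final command also didn't survive here; given that the warning above says the database was put in dbo-only mode, bringing it back online presumably looked something like this:

```sql
-- Return the database to normal multi-user access.
ALTER DATABASE [DBNAME] SET MULTI_USER;
```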

Thursday, September 30, 2010

I ran into a problem with a SQL 2005 installation today that I wanted to jot a note down about.  Microsoft's volume licensing website gives us three files for SQL 2005.  The key is not to extract them all into the same directory.  If you do, you'll end up with an installation tool that is missing the server components.

To avoid being completely confused, extract the files like this:



<root>\SQL 2005 ServicePack2

By the way, run Setup.EXE from the Server directory and don't forget to patch SQL with the latest Service Pack!

Monday, September 27, 2010

SQL Server Build Numbers Reference

I always have a hard time finding these pages when I need them most so I thought I'd link them up.  You may also consider for a really nice unofficial build chart.  I wish Microsoft would maintain them in a similar (consolidated) format.

Update: after I originally posted this article, my friend Greg over at CodeTempest showed me his favorite, which may be my new favorite as well. It looks better organized and more likely to be maintained.

Official Microsoft Reference Pages

Build Numbers for SQL Server 2008 R2 (Pre-SP1)

Build Numbers for SQL Server 2008 SP1

Build Numbers for SQL Server 2008 (Pre-SP1)

Build Numbers for SQL Server 2005 SP3

Build Numbers for SQL Server 2005 SP2

I don't track anything older than this, because frankly, you should really consider upgrading.

Thursday, September 16, 2010

Waiting to Kill DTC

Today I was migrating database files for BizTalk to new storage. I wrote a simple little automated move script, which I'll share at a later date. Everything was going peachy. I asked the BizTalk administrator to shut down the services before I began. I noticed that there was a spid with pending transactions, but since he assured me that he was completely offline, I decided to proceed. Every database moved cleanly except for BizTalkMgmtDb, which was the database with the spid. Since my script rolled back transactions, I knew that SQL Server would already be trying to roll back the spid's transaction. Nevertheless, with the BizTalk admin's permission, I tried killing the spid. It of course gave me the famous (at least to me):

SPID X: transaction rollback in progress. Estimated rollback completion: 0%. 

This was a production environment and I was inside business hours, so I thought I was going to be stuck. As I studied the problem, I could see that the SPID was clearly waiting on DTC:

SELECT lastwaittype FROM sys.sysprocesses WHERE spid = X


Then I remembered that the clustered DTC service on the SQL cluster had crashed the previous evening. One of the infrastructure engineers resolved the issue and brought the DTC service back online before I could take a look. I tried restarting the DTC service, hoping that killing its connection to the SQL Server service would release the spid. Alas, it made no difference. I searched around for a solution but found lots of folks advocating restarting SQL Server. Since I knew better, I kept on researching.

Eventually, I found some useful material about how the DTC works. I realized that it must be the unit of work under the hood that was keeping the SPID from being rolled back. I used the DTC Tester tool to validate the DTC and could tell that the DTC was no longer accepting connections:

DtcGetTransactionManager failed: 8004d01b DtcGetTransactionManager Call Error: Indicates unable to connect to the transaction manager, or the transaction manager is unavailable.

Finally, I was getting somewhere! This gave me an idea: the spid must be hung because its DTC transaction is orphaned. If that's the case, then there should be no reason why I couldn't kill it. All I needed was the unit of work ID. It turns out that the solution is pretty simple.

SELECT req_transactionUOW FROM syslockinfo WHERE req_spid = X


The result will be a GUID-like string.  Use the result to kill each unit of work like this:

KILL 'AC6B59D7-A9C8-44DE-A650-CB63B8249D04'

That's it; after that, the spid I had previously killed finally died! I hope this tidbit is helpful to someone else in a pinch.

Monday, September 13, 2010

Why you should use a password with base entropy + complexity

The premise is simple. By now you have an account registered in hundreds of places. Some companies, such as Amazon, Google and Microsoft, go to great pains to be sure that your information is secure. However, occasionally you find yourself registering with a small run-of-the-mill eTailer that can't afford an army of security people. Let's assume you registered using the same password that you use for your email and your bank. Now all a hacker needs to do is exploit the weak security of the eTailer to steal your password and email address. Then, using those credentials, he or she logs in to your email account and uses the built-in search utility to find a time that you disclosed your bank account username. Maybe it was a password reset, or maybe you emailed it to yourself for "safe-keeping". In some cases, banks use your Social Security number for a username, which is even more dangerous!

No matter; he or she is now armed with your bank account login ID, your email account and your favorite password. Oh wait, doesn't the bank ask a question like "What was your first car?" or "Who do you most admire?" No problem: the hacker can log in to Facebook with your email address and favorite password and look for one of those "100 things you didn't know about me" notes. While any single attack is statistically unlikely, password re-use makes it all too easy. What's more concerning is that many cyber-criminals these days aren't very technical; they literally buy the hacking software they are going to use. This means mass production!

Here's an idea. Use a complex base password: something with numbers, letters and a special character or two. Say it's six characters, like i2!@bh. You can remember that, right? It's only six characters! Then complete the rest of the password with something that reminds you of the site you're visiting. For example, i2!@bhamazonia for your Amazon account and i2!@bhgoogoo for your Google account. You get the idea. Just don't re-use the same password.
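To make the scheme concrete, here's a toy sketch in Python. The function name and mnemonics are my own, and a real scheme should use something less guessable than plain concatenation; this just illustrates the base-plus-mnemonic idea.

```python
# Toy illustration of the base + site-mnemonic password scheme.
# The complex base stays constant; the mnemonic varies per site,
# so no two sites share the same full password.
def site_password(base: str, mnemonic: str) -> str:
    return base + mnemonic

print(site_password("i2!@bh", "amazonia"))  # i2!@bhamazonia
print(site_password("i2!@bh", "googoo"))    # i2!@bhgoogoo
```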

Good luck, and use safe-surfing practices!

Wednesday, August 18, 2010

T-SQL Formatting Number with Commas

I don't know about you, but I find reading very large numbers difficult without commas. Often, when I'm looking at performance statistics in SQL Server, I want to see the output with commas for the sake of my feeble brain. SQL Server most likely lacks this functionality natively because formatting really belongs in a presentation layer. However, since most of my work is in SQL Management Studio, I choose to do the following:

CONVERT(varchar, CAST(<column name> AS money), 1)

I realize it's a simple, inelegant and totally duct-tape-programmer approach to solving this problem. However, someone else might benefit from it as I do, so I choose to share it.

If you want to save some typing, you can always create it as a function that might look like this:
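The function itself wasn't preserved in this copy; a sketch of what it might look like follows. The function name is my own invention, and note that style 1 leaves a trailing ".00" which the sketch trims off for whole numbers:

```sql
-- Wrap the money-cast trick in a scalar function for reuse.
CREATE FUNCTION dbo.fn_FormatWithCommas (@value bigint)
RETURNS varchar(30)
AS
BEGIN
    -- Style 1 adds comma separators; trim the trailing ".00" for integers.
    DECLARE @s varchar(30)
    SET @s = CONVERT(varchar(30), CAST(@value AS money), 1)
    RETURN LEFT(@s, LEN(@s) - 3)
END
```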

Monday, August 16, 2010

SQL 2008 Cluster Installation Error "the volume name for drive could not be determined"

I ran across this rare gem today. I didn't find any information about the specific error online, so I thought I'd document it on my blog. I was installing a new SQL Server 2008 cluster. Right after the "Feature Selection" step, the installer crashed with the following error: "The volume name for drive G:\ could not be determined."

It turned out that drive G:\ was mapped to my roaming user profile, but it was also one of the drive letters chosen by the storage administrator who built the cluster. Since anyone with a roaming profile mapped to that letter would run into this problem, I asked the storage admin to give me new drive letters. Voilà, problem solved.