Archive for the ‘Debugging’ Category

Save time when you are debugging by rebasing your DLLs

Monday, August 7th, 2006

If you are working with a process that has many of your DLLs loaded in it, and your DLLs tend to be loaded and unloaded dynamically, then you can sometimes save yourself a lot of trouble when debugging a problem by making sure that each of your DLLs has a unique base address.

That way, if you have a bug where one of your DLLs is called after it has been unloaded, you can easily figure out which DLL the call was supposed to go to, and which function it should have reached, by loading the DLL using the dump file loading support (start WinDbg as if you were going to debug a dump file, but select the .DLL in question instead of a .dmp file) and unassembling the address that was referenced.

On Windows Server 2003 and later, NTDLL maintains a list of the last few unloaded modules and their base addresses in user mode (accessible using “lm” in WinDbg), which can make debugging this kind of problem a bit more manageable even if you don’t rebase your DLLs.  Rebasing is easy, however, and improves loading performance (especially scalability under Terminal Server), so I would highly recommend going the rebasing route anyway.

If you aren’t on Windows Server 2003 or later and didn’t rebase your DLL, then chances are the loader relocated it at load time to some unpredictable location.  That makes it much more difficult to find the actual DLL being called, and which function / global / etc in the DLL was being referenced when the crash occurred, than if the DLL always loads at its preferred base address and you can simply look up the address in the DLL directly.
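
As a rough sketch of the workflow (the base address and crash address below are made up for illustration), you might give each DLL its own preferred base either at link time or after the fact with the rebase tool:

link /DLL /BASE:0x60100000 /OUT:mydll.dll mydll.obj
rebase.exe -b 0x60100000 mydll.dll

Then, if a bad call lands at, say, 0x60102f10, open the DLL in WinDbg as if it were a dump file and resolve or unassemble the address:

windbg -z mydll.dll
ln 0x60102f10
u 0x60102f10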

Using the symbol proxy in cross-domain scenarios with UncAccessFilter

Friday, August 4th, 2006

If you have a symbol proxy set up and you need to have it talk to a symbol server path that references a UNC share on a server that isn’t on the same domain as the IIS webserver hosting symproxy, then you may need to do some hackery to get the symbol proxy to work properly without prompting for credentials to use on the network.

Specifically, the problem here is that there is no way to tell IIS to map a UNC share with a particular username and/or password before processing a request unless the request itself points to a network share.

One way to work around this is to use a simple ISAPI filter that I wrote (UncAccessFilter) to make sure any required UNC paths are mapped before the symproxy ISAPI filter is called.  After installing the ISAPI filter in the usual way, make sure that it is prioritized above the symproxy filter.  To configure it, you will need to manually set up some values in the registry.

Create the key “HKEY_LOCAL_MACHINE\Software\Valhalla’s Legends\Skywing\UncAccessFilter” and ensure that the account web requests will be running as has read access to it.  You will probably want to ensure that only the web access user, administrators, and the system account have read access to this key, because it will have passwords stored in it (be aware of this as a potential security risk if someone gets access to the registry key, as the passwords are not obfuscated in any way).  Then, for each share, create a REG_SZ value whose name is the share path you want to map (e.g. \\fileserver\fileshare) and whose contents are of the format “username;password”, for instance, “fileserver\symbolproxyuser;mypassword”.

To debug the filter, you can create a REG_DWORD value in that key named “DebugEnabled” and set it to 1, in which case the IIS worker process in which the ISAPI filter is running will make some diagnostic OutputDebugString calls about what operations it is performing if you have a debugger attached to the process.  Assuming you configured the filter properly, on startup you should see a series of messages listing the configured UNC shares (you may need to attach to the svchost process that creates the w3wp worker processes and use “.childdbg 1” to catch this message for new worker processes on startup).
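
Putting the above together, the registry setup might be scripted like this (using the example share and account names from above; substitute your own values):

reg add "HKLM\Software\Valhalla's Legends\Skywing\UncAccessFilter" /v "\\fileserver\fileshare" /t REG_SZ /d "fileserver\symbolproxyuser;mypassword"
reg add "HKLM\Software\Valhalla's Legends\Skywing\UncAccessFilter" /v DebugEnabled /t REG_DWORD /d 1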

If you are using the prebuilt binaries, then make sure to install the VC++ 8 runtimes on the IIS server first.  Note that the prebuilt binaries are 32-bit only at this time; you’ll need to rebuild the ISAPI filter from source if you want to use the filter in 64-bit mode.

Be aware that the ISAPI filter is fairly simple and is not extraordinarily robust (and may be a bit slow if you have high traffic volumes, since it enumerates mapped network shares on every incoming request).  Additionally, be aware that if one of the servers referenced in the registry is down, it can make web requests that you have configured to be filtered by UncAccessFilter take a long time as the filter tries unsuccessfully to reconnect to the configured share on that server.  However, when properly configured, it should get the job done well enough in most circumstances.

Note that if you can get away with using the same account for all of your shares, a better solution is to simply change the account the web application associated with the symbol proxy is running under.  If you need to use multiple accounts, however, this doesn’t really do what you need.

Update: It would help if I had posted the download url.

Remote debugging review

Thursday, August 3rd, 2006

Over the past week or two, I’ve written about some of the various remote debugging options available to you through the Debugging Tools for Windows (DTW) package.  I’ve covered most of the major debugging mechanisms available at this point and given brief descriptions of their strengths and weaknesses.

Here’s an indexed listing of the posts on this topic so far:

  1. Overview of WinDbg remote debugging
  2. Remote debugging with remote.exe
  3. Remote debugging with KD and NTSD
  4. Remote debugging with -server and -remote
  5. Reverse debugging -server and -remote
  6. Securing -server and -remote remote debugging sessions
  7. Remote debugging with process servers (dbgsrv)
  8. Activating process servers and connecting to them
  9. Remote debugging with kdsrv.exe
  10. Remote debugging review

At this point, you should be able to use all of the above remoting mechanisms in their basic usage cases.  There are a couple of obscure features that I did not cover, such as doing -server/-remote over serial ports, but between my posts and the documentation you should be able to figure out what to do if you ever find a use for such esoterica (let me know if you do!).  What remains to be told is some general advice on which remoting mechanism is best for a particular problem.

In general, the most important factors in choosing a remoting mechanism are:

  • Available bandwidth and latency between your computer and the remote system.  Some remoting mechanisms, like dbgsrv, perform very poorly without a high bandwidth, low latency link.
  • Whether symbol access needs to be done on the client or the debugging target.  This consideration is important if you are debugging a problem on a customer site.
  • What types of targets you need to support.  Some mechanisms, such as process servers, do not support all target types (for instance, lack of dump file debugging support).
  • Whether you need special support for working through a firewall (i.e. reverse connection support).
  • Ease of use with respect to setting up the remoting session.

These are the general factors I use to decide which remoting mechanism to use.  For example, in ideal cases, such as debugging a problem on a LAN or on a virtual machine hosted on the same computer, I will almost always use a process server for remote debugging, simply because it lets me keep my own WinDbg workspace settings and symbol access without having to set up anything on the target computer.  Over the Internet, process servers are usually too slow, so I am often forced to fall back to -server/-remote style remoting.

Taking into account the guidelines I mentioned above, here are the major scenarios that I find useful for each particular remoting mechanism:

  • Process servers and smart clients (dbgsrv).  This is the remote debugging mechanism of choice for remotely debugging things on virtual machines, on a LAN or other fast connection, or even on the same computer (which can come in handy for certain Wow64 debugging scenarios, or cross-session debugging under Terminal Server prior to Windows XP).  Process server debugging is also useful for debugging early-start system services remotely, where the infrastructure to do symbol access (which touches many system components, for things like authentication support) is not yet available.  For this scenario, you can use the “-cs command-line” parameter with dbgsrv to start a target process suspended when you launch dbgsrv, which is handy for using Image File Execution Options to have dbgsrv act as the debugger for early-start services (see the sketch after this list).  This can also be more reliable than -server and -remote when you need symbol access: for certain services, the debugger may have to talk to the very service you are debugging in order to complete a network symbol access request, which deadlocks the debugger and loses the debugging session.
  • -server and -remote.  If I am doing debugging over the Internet, I’ll usually use this mechanism as it’s relatively quick even over lower quality connections.  This mechanism is also useful for collaborating with a debugging session (for instance, if you want to show someone how to perform a particular debugging task), as you can have multiple users connect to the same debugging session.  Additionally, -server/-remote are handy if you have a large dump file on a customer site and you want to debug it remotely instead of copying it to your computer, but would like to do so from the context of your local computer so that you have easier access to source code and/or documentation.  Finally, -server/-remote support remote kernel debugging where process servers do not.
  • KdSrv.exe.  If you need to do remote kernel debugging over a LAN, this is the mechanism of choice.  Be aware that kernel debugging is even more latency and bandwidth sensitive than process servers, making this mechanism useless unless you have a very fast, LAN-like connection to the target.  If those conditions hold, KdSrv.exe provides the same main benefits that a process server does for user mode debugging: local symbol access, local debugger extensions, and the ability to use your workspace settings on the local computer instead of setting up your UI on a remote system.
  • NTSD through KD.  This is useful in a couple of specialized scenarios, such as debugging very early start processes or performing user mode debugging in conjunction with the kernel debugger.  While controlling NTSD through KD is much less convenient than a conventional remote debugging session, you won’t have to worry about your session going away or getting disconnected while the remote system is frozen in the kernel debugger.  In particular, this is useful for debugging code that makes calls to the LSA from kernel mode, or other situations where kernel mode code you are debugging is extensively interacting with user mode code.
  • Remote.exe.  I have never really found a situation that justifies the use of this as a preferred remoting mechanism, as its capabilities are far eclipsed by the other remoting services available and the benefits (low network utilization) are relatively minimal compared to -server/-remote in today’s world of cable modem and xDSL connections.
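
As a rough sketch of the early-start service scenario mentioned in the process server item above (the path, port, and service name are hypothetical, and the exact quoting that dbgsrv expects around the -cs argument may need adjusting), you would point Image File Execution Options at dbgsrv so that the service is started suspended under a process server, and then attach from another computer with a smart client:

reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\myservice.exe" /v Debugger /t REG_SZ /d "c:\debuggers\dbgsrv.exe -t tcp:port=1234 -cs"

windbg -premote tcp:port=1234,server=target-computer

From there, you can pick the suspended service process out of the process list (F6) and debug it with symbols resolved on your own workstation.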

If you are debugging a problem on a customer site, you will likely find reverse connection debugging highly useful.  All of the modern remote debugging mechanisms support reverse connections except NTSD over KD, for obvious reasons.

Another consideration to take into account when selecting which mechanism to use is that you can mix and match multiple remoting mechanisms within a debugging session if it makes sense to do so.  For instance, you can start a process server, connect to it with ntsd, and launch a -server/-remote style server with “.server” that you then connect to with WinDbg.  This capability is usually not terribly useful, but there are a couple of instances where it can come in handy.
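
As a concrete sketch of that mix-and-match case (the port numbers, host names, and pid below are made up): start a process server on the target, attach to a process through it with ntsd on an intermediate machine, expose that session with “.server”, and then connect to the session from WinDbg on your own workstation.

dbgsrv.exe -t tcp:port=1234

ntsd -premote tcp:port=1234,server=target -p 2048

0:001> .server tcp:port=5678

windbg -remote tcp:port=5678,server=intermediate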

That’s all for this series on remote debugging.  I may come back and revisit this topic again in the future, but for the moment, I’ll be focusing on some different subjects for upcoming posts.

Why you shouldn’t touch things in DllMain

Wednesday, August 2nd, 2006

One topic that comes up on the Microsoft newsgroups every once in a while is whether it is really that bad to be doing complicated things in DllMain.

The answer I almost always give is yes: you really should stay away from that.

This is a particularly insidious topic, as many people do things in DllMain anyway, despite MSDN’s warnings to the contrary, see that it seems to work on their computer, and ship it in their product / program / whatever.  Unfortunately, this often ends up causing hard-to-debug problems that only fail on a particular customer computer – the kind that you really don’t want to get stuck debugging remotely.  The reason is that many of the things that can go wrong in DllMain are environment specific: whether a particular DLL that you touch inside DllMain (when you break the rules) happened to already be loaded before your DLL was loaded will often make the difference.

If you dynamically load a DLL in DllMain and it has not already been loaded, you will get back a valid HMODULE, but in reality the initializer function for the new DLL will not be called until after your DllMain returns.  However, if the DLL had already been loaded by something else and your LoadLibrary call just incremented a reference count, then DllMain has already been called for that DLL.  Where this gets ugly is if you call a function that relies on some state set up by that DLL’s DllMain, but on your development/test boxes the DLL in question had already been loaded for some other reason.  If, on a customer computer, you end up being the first to load the DLL, you’ll get mysterious corruption and/or crashes that never repro in the lab for you.
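
A minimal sketch of the hazard described above (otherdll.dll and OtherDllFunction are made-up names, and this is the pattern to avoid, not a recommendation):

// Illustrates the LoadLibrary-in-DllMain hazard described above.
#include <windows.h>

typedef int (WINAPI *OTHERDLLFUNCTION)(void); // hypothetical export

BOOL WINAPI DllMain(HINSTANCE Instance, DWORD Reason, LPVOID Reserved)
{
    if (Reason == DLL_PROCESS_ATTACH)
    {
        // LoadLibrary returns a valid HMODULE whether or not the DLL was
        // already loaded...
        HMODULE Other = LoadLibraryW(L"otherdll.dll");

        if (Other)
        {
            OTHERDLLFUNCTION OtherDllFunction =
                (OTHERDLLFUNCTION)GetProcAddress(Other, "OtherDllFunction");

            // ...but if nothing else had loaded otherdll.dll yet, its DllMain
            // has not run, so any state OtherDllFunction depends on is still
            // uninitialized.  On machines where another component loaded
            // otherdll.dll first, this call appears to work; elsewhere it
            // corrupts state or crashes.
            if (OtherDllFunction)
                OtherDllFunction();
        }
    }

    return TRUE;
}

Whether this code appears to work depends entirely on whether otherdll.dll happened to be loaded into the process before your DLL, which is exactly the kind of environment-specific difference that never reproduces in the lab.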

So, stay away from complicated things in DllMain.  There are other reasons too, but this is the big one for current OS releases (of course, Vista and future versions may add other things that can go wrong if you break the rules).

If you are interested, Michael Grier has an excellent series on this topic to help you understand just what can go wrong in DllMain.

Remote debugging with kdsrv.exe

Monday, July 31st, 2006

Most of the debugging mechanisms I have gone through so far will also support kernel debugging, though I have not focused on this fact.  You can use remote.exe for controlling KD remotely, and -server/-remote for controlling one KD through another KD or WinDbg.  Both of these mechanisms can be used to control a kernel debugger remotely (keep in mind that you still need a computer separate from the target to run kd.exe on, of course); however, they do not allow the same flexibility as dbgsrv.exe does.  In particular, this means no client-side symbol access and no client-side debugger extensions.

However, there is a way to get the same functionality with the kernel debugger as you get with the user mode debuggers when using dbgsrv.exe.  Enter kdsrv.exe, the kernel debugger server.  Kdsrv.exe is an analogue of dbgsrv.exe and fulfills the same basic functional requirements; it allows multiple debugger clients to connect to it and begin kernel debugging sessions on resources that are connected to the computer running kdsrv.exe.  Like dbgsrv.exe, kdsrv.exe is used with one debugger client per debugging session, and also like dbgsrv.exe, kdsrv.exe does not start any debugging sessions on its own and leaves that up to the clients that connect remotely.  It also allows for secured connections and reverse connections, just like dbgsrv.exe (using the same connection string values).

Kdsrv.exe allows the same rich experience as dbgsrv.exe when it comes to doing remote kernel debugging.  It allows you to perform symbol access and debugger extension calls on the local debugger client rather than from a kd.exe instance running on the remote system.  It also has many of the same limitations as dbgsrv.exe, such as no support for remote dump file debugging.

To activate a kdsrv.exe server, use the same syntax that I described in “Activating process servers and connecting to them”.  The command options are identical to dbgsrv.exe with respect to specifying a connection string and starting the server (some of the little-used other command line options to dbgsrv.exe that relate to starting a process along with the debugger server are not supported by kdsrv.exe).  For example, you could use:

kdsrv.exe -t tcp:port=port,password=secret

You’ll get an error message box if you give kdsrv.exe an unacceptable command line; otherwise, it will simply run in the background.

Connecting to a kdsrv.exe instance uses a slightly more complex connection string syntax, which is an adaptation of the one used by smart clients with -premote.

The client connection string is given in the format:

kdsrv:server=@{tcp:port=port,server=server-ip,password=password},trans=@{kd-string}

(Password is optional.)

The “kd-string” value is what you would normally pass to kd.exe or windbg.exe to start a kd session.  It specifies a remote resource to connect to that resides on the machine running kdsrv.exe.  For instance, you might use “com:port=comport,baudrate=baudrate” to direct kdsrv.exe to connect the kernel debugger over a com port connection using the specified baud rate.

To connect a debugger client to a kdsrv.exe server, use a command line formatted as follows:

debugger -k connection-string

where “connection-string” is the “kdsrv:…” string discussed above.  Here are some examples of starting a server and connecting to it:

Starting the kdsrv instance:

kdsrv.exe -t tcp:port=1234,password=secret

Connecting kd:

kd.exe -k kdsrv:server=@{tcp:port=1234,server=127.0.0.1,
password=secret},trans=@{com:port=com1,baudrate=115200}

After that, you should be set to go.

You can use a variety of different underlying debugger targets with kdsrv.exe, including serial (com), 1394, and serial-over-named pipe (virtual machine) targets.
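
For example, to kernel debug a virtual machine whose virtual COM port is exposed as a named pipe on the computer running kdsrv.exe, the client command line might look something like the following (the pipe name, port, and password are hypothetical, and the exact com: options depend on your virtual machine software):

kd.exe -k kdsrv:server=@{tcp:port=1234,server=kdsrv-host,
password=secret},trans=@{com:pipe,port=\\.\pipe\vmdebug,resets=0,reconnect}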

Activating process servers and connecting to them

Friday, July 28th, 2006

The mechanisms used to activate a process server are fairly similar to those used to work with the -server and -remote remoting mechanism.

To start a process server, you must use a special utility called dbgsrv.exe that is distributed with DTW.  This program is the server end of the debugger connection.

Like -server and -remote, process server / smart client debugging uses connection strings for both the client and the server.  The syntax for client and server connection strings is for the most part compatible, so you should look back at my existing posts about basic remote connectivity, reverse connections, and securing debugger connections.  For the most part, things work exactly the same as -server and -remote; all of the connectivity options supported by -server and -remote work for process servers and smart clients, and all of the features (such as reverse connections and secured connections) are available using the same mechanisms as well.

The main difference is how you pass the connection string to dbgsrv.  Where you would previously use “-server connection-string”, you will use “-t connection-string” with dbgsrv.  For instance, an example command line would be:

dbgsrv.exe -t tcp:port=port,password=password

To connect to this process server, you can use the “-premote” parameter with a debugger client (NTSD, CDB, and WinDbg).  This parameter functions in the same way as “-remote”, except that it is for connecting to process servers and not -server-style remote debugging servers.

debugger -premote tcp:port=port,server=server,password=password

Note that if you are using NTSD or CDB, you will need to specify a pid to connect to (e.g. -p pid) on the command line as well.  This is because a process server is not fixed on just one target process, but instead allows you to debug any process it has permission to debug on the remote system.  With WinDbg, you can just open the process list (e.g. F6) like you would on a local system, and you will be presented with a list of running processes to debug on the remote system.
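
For instance, attaching to a specific process through a process server with cdb might look like this (the port, server address, password, and pid are placeholders):

cdb.exe -premote tcp:port=1234,server=192.168.1.20,password=secret -p 2048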

After the connection is established, you can continue to debug as if you were running the debugger directly on the remote system.  Remember that with process servers, things like symbol access and extension dll calls are actually performed by the client debugger and not the remote system.  This means that if you set the symbol path, you are setting the symbol path for your debugger and not the process server (which has no concept of a symbol path).  As a result, your system will be the one to access symbol repositories and not the remote system.

The important thing to remember about process servers is that unlike the other remote debugging mechanisms that I have discussed thus far, process servers provide you with an entire “view” of the remote system and not just a remote view of a single debugging session.  This is in effect widening the scope of the remote debugging session from just one target to any programs running on the target system.

Remote debugging with process servers (dbgsrv)

Thursday, July 27th, 2006

For the last few entries, I have been discussing the -server and -remote debugging mechanism.  While this remoting mechanism is good for a number of scenarios, in some cases, you want the debugger client to do the “heavy lifting” (such as managing symbols).  This does require significantly more bandwidth and is latency sensitive, but can provide advantages in several cases, such as if you are debugging a problem at a customer site live and need to access symbols, but for security reasons you can’t grant the debugger running on the customer site direct access to your internal symbol store.

The solution is to use what is called a “process server” (dbgsrv.exe) and a “smart client” (“-premote” command line parameter for the DTW debuggers).  This remoting mechanism starts a small stub server process (dbgsrv.exe) on the target computer which can accept one or more connections from debugger clients.  Once debugger clients connect, they can see a list of processes (e.g. F6 in WinDbg) on the target computer and select a process to debug.

The process server mechanism is very useful in the scenario where you need to do symbol access on your client.  It is also useful to do cross-session debugging under Terminal Server in Windows 2000, as native cross-session debugging is not supported under Windows 2000.  For that to work, you must start dbgsrv.exe under the session you want to debug processes in, and then connect to it using a DTW debugger running in your session.
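
For the Windows 2000 cross-session case, that might look something like this (the port number is arbitrary): in the Terminal Server session you want to debug, run

dbgsrv.exe -t tcp:port=1234

and then, from a debugger running in your own session, connect with “-premote tcp:port=1234,server=localhost” (more on the exact connection syntax in the next post) and pick the target process out of the process list.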

Process servers are, however, much more bandwidth and latency sensitive than -server/-remote.  As a result, I would not recommend using them on a bandwidth- or latency- constrained network link.  Keep this in mind when choosing whether to use them to debug a problem on a customer site.

Also, process servers are restricted to just live user mode processes, and cannot be used to debug dump files or perform kernel debugging (like the other remoting mechanisms allow).  Additionally, you cannot have multiple remote debugger clients working on the same process like you could with -server/-remote.

Although there are some downsides (bandwidth usage and latency sensitivity), process servers give the richest remote debugging experience for user mode processes of all of the mechanisms I have discussed thus far (provided your network connection can support this method).  Besides the ability to do local symbol access, process servers let you run custom extension DLLs locally without having to ship them to a customer site (important if you have custom extensions that you don’t necessarily want to become public, but that are nonetheless useful for troubleshooting customer problems).  When you are using a process server, the full capabilities of the debugger should be available to you as if you were sitting at the remote machine rather than just remote controlling a debugger, including the ability to stop debugging one remote process and switch to another just like you would with a local process.

Like -server and -remote, process servers operate through the use of a server connection string and a client connection string.  For the most part, these connection strings follow a syntax very similar to -server and -remote.  In the next installment of this series, I’ll go into detail about what you need to do in order to set up a successful process server / smart client remote debugging session.

Securing -server and -remote remote debugging sessions

Wednesday, July 26th, 2006

Previously, I’ve discussed the basics of -server and -remote, and how to do reverse debugging connections.  This covers most of the interesting functionality for this remote debugging mechanism, with one big exception: securing your remote debugger connection when you are operating over an untrusted network (such as the Internet).

Used as-is, none of the remote debugging techniques I have discussed so far provide any real security, other than the inherent difficulty of hijacking a TCP connection.  Data is sent in plaintext, and there is no authentication of commands, so potentially sensitive commands could direct a debugger to perform dangerous operations, and even potentially take control of the target computer if a privileged process is being remotely debugged.

If you are using the NTSD over KD remoting technique, this is usually less of a concern, since you are usually operating over a serial or 1394 cable physically connecting the two computers.  For plain -server and -remote debugging, or remote.exe debugging, however, this problem is more serious as these methods are typically used over networks.

For remote.exe, there is not all that much that you can do to secure connections.  The -server and -remote mechanism has a happier tale to tell, however.  This remoting mechanism does in fact have built-in support for secure connections, using either SSL over the TCP transport or SSL over the named pipe transport.  I’ll primarily cover SSL over TCP, but the general concepts apply to the SSL over named pipe (“SPIPE”) transport as well.  There is additional support for simple password authentication on all transports, which I’ll discuss as well.  However, this only adds a very basic form of protection and usually buys you only minimally more security than an unpassworded connection.

To use password authenticated connections, simply append the “password=password” parameter to the connection string for both the client and the server.  For instance, “tcp:port=1234,server=127.0.0.1” becomes “tcp:port=1234,server=127.0.0.1,password=secret”.  That’s all there is to this mechanism; after using it, both ends must specify the same password or the connection will fail.  I should emphasize that this again does not really provide strong protection, and the SSL alternatives should usually be used instead.

The SSL options are slightly more complicated.  These require a certificate that supports server authentication and is known to both ends of the remote connection.  The certificate that you use needs to have the “server authentication” role enabled for it.  Additionally, note that the same certificate is needed by both parties; that is, both parties must have the private key for the certificate.  This makes using the SSL option unfortunately more cumbersome than it could be.  To get this to work, you will typically have to request a server authentication certificate from your domain CA, and then install it (with the private key) on both computers.  Then, you can use it with SSL or SPIPE remote debugging.

The general format of an SSL transport connection string is very similar to the TCP transport, with some additional options added.  For the client, it is in the format of “ssl:port=port,proto=ssl-protocol,server=server-ip,[certuser|machuser]=cert-name-or-thumbprint“.  For a server, use “ssl:port=port,proto=ssl-protocol,[certuser|machuser]=cert-name-or-thumbprint“.  Remember that the certificate must match for both parties.

The “proto” parameter specifies which dialect of SSL to use, and can be one of tls1, pct1, ssl2, or ssl3.  The protocol must be the same for both the client and the server.

The “certuser” parameter specifies either the name (e.g. “Subjectname”) or thumbprint (e.g. “12345689abcdef…”) of a certificate in the user certificate store.  Alternatively, you can use “machuser” instead of “certuser” to specify that the certificate is contained within the machine store (using “machuser” typically requires that the debugger be run with administrator or LocalSystem privileges).  To determine the subject name or thumbprint of a certificate, run mmc.exe, add the “Certificates” snap-in for either your user or the computer account (if you are an administrator and want to use “machuser”), locate the certificate you want to use (typically under “Personal\Certificates”), open the certificate property sheet, and view the “Details” tab.  The common name (“CN = ”) under “Subject” is the subject name you will use, and the hex string under “Thumbprint” is the thumbprint you will use (you only need one or the other).  If you are using the thumbprint, then remove all spaces between the hex digits when providing the parameter to “certuser” or “machuser”.
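
If you prefer the command line, certutil can also list certificates along with their thumbprints (a quick sketch; the exact output format varies between OS versions):

certutil -user -store My
certutil -store My

The first command lists your user’s personal store; the second lists the machine store (run it as an administrator).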

Putting all of these together, once you get the certificate in place on both computers, activating an SSL remote debugging session is basically the same as with a TCP remote debugging session.  To start an SSL debugging server, you might do this:

debugger -server ssl:port=1234,proto=tls1,
machuser=1111111111111111111111111111111111111111

Likewise, to start the client, you would use something like this:

debugger -remote ssl:port=1234,server=127.0.0.1,proto=tls1,
machuser=1111111111111111111111111111111111111111

Afterwards, the debugging connection should operate as any other -server/-remote session would.  All of the usual other considerations for -server/-remote apply to SSL as well; for instance, you can use reverse debugging and you can use “.server” instead of “-server”.

Because of the difficulty of setting up the certificate for SSL debugging, I usually recommend securing the remote debugging session with something other than the built-in SSL support, such as a VPN.

Next time: Debugger process servers and smart clients.

Reverse debugging -server and -remote

Tuesday, July 25th, 2006

Last time, I provided a basic overview of some of the options available for remote debugging using -server and -remote.  There are still a couple of interesting things to consider about this particular debugging facility which I did not mention last time, though.

One of the more important extra features of -server and -remote is the ability to do reverse connections when using the TCP transport.  You are probably already familiar with this concept if you have used VNC (or similar tools) to assist with resolving a problem at a customer site.  Reverse connections allow you to remotely debug a computer that is firewalled off, provided there is an open port on your computer that the target can reach.  To perform reverse connection debugging, there are a couple of changes that need to be made to how I previously talked about using -server and -remote.

First, you should start the debugger client before the debugger server.  This is because the debugger client will be the program that actually performs a listen call on a socket, not the debugger server.

To start a reverse connection debugger client, use the connection string “tcp:port=port,clicon=0.0.0.0”.  (The IP address you supply to “clicon” is apparently ignored, and the debugger always listens on the wildcard address.)  For example, you could use:

debugger -remote tcp:port=1234,clicon=0.0.0.0

Additionally, if you are using WinDbg, you can use the Ctrl-R / Connect to Remote Debugger Session UI to accomplish this task if you are not already in a debugging session.

After starting the reverse connection client, it will appear to be frozen.  In the case of WinDbg, the UI will appear to be unresponsive; this is the normal and expected behavior and not indicative of a problem!

After the reverse connection debugger is started, then the next step is to start the debugger server on the target computer.  Use a connection string in the form of “tcp:port=port,clicon=client-ip-address“.  For example:

debugger -server tcp:port=1234,clicon=127.0.0.1

After you start the debugger server, it will attempt to connect out to the client debugger.  If all goes well, you should be able to interact with the remote target the same as if you were using conventional -server and -remote.

As with conventional -server and -remote, you can use the “.server” command instead of the “-server” command line parameter to start a debugger server from within an already-active debugging session.

Next time: Additional connection string options that you can use for securing your -server/-remote remote debugger connections on an untrusted network.

Remote debugging with -server and -remote

Monday, July 24th, 2006

Moving onwards to the more modern remote debugging services available, the next option available for remote debugging with the DTW package is -server/-remote.   This remote debugging mechanism is one that you may find yourself using fairly frequently.  Its advantages are rich functionality and integration with the WinDbg GUI (as a remote client), and reasonably low bandwidth usage, though not quite as lightweight as the previous options.

This mechanism allows you to connect one (or more) debugger clients to a debugger server.  This allows for a limited form of collaboration between multiple people debugging the same problem.  Unlike the other mechanisms discussed thus far, -server/-remote utilizes a more advanced protocol that, while leaving most of the “hard work” (including symbol management) to the debugger server, allows things like the various WinDbg debuggee status windows to work and receive useful information (i.e. not just command window text).  So, when using this protocol with WinDbg as the client, you can utilize the memory / disassembly / register windows (and so forth).  Note that you can use any of WinDbg/ntsd/cdb/kd as either a client or server with this protocol.  It can be used for any debugging type that is supported by these debuggers, including dump file debugging.

The protocol also provides for varying forms of security.  You can use simple plaintext password authentication, or, if you want true security, SSL over named pipes or TCP.  The protocol can operate over several underlying transports: tcp, com (serial port), 1394, or named pipes.  (Note that this is the transport used to communicate between the debugger server and the debugger client; it is not related to the medium used to connect to the target in kd.)

When using -server/-remote, it is important to ensure that both the debug client and debug server are from the same DTW package; otherwise, unexpected results may occur (typically resulting in the connection failing silently). 

There are many different ways to activate this remote debugging mechanism; you can start a debugger with -server to create a debugger server, or with -remote to act as a debugger client.  Alternatively, you can use the .server command to create a debugger server out of an existing debugging session.  If you are using WinDbg, you can use File->Connect to Remote Session (Shortcut: Ctrl-R) to connect to a remote debugging server instead of using the -remote command line parameter.

To use this debugging mechanism, you will need to create a connection string to be used by the debugger server and then the debugger client.  The general format of a connection string is “transport:transport-parameters”.  For our examples, I will use tcp as the transport; this is likely to be the most common one you’ll use in the real world.  You can look up the usage of the other transports in the documentation if you are curious.

A simple connection string to use if you want to create a tcp server that listens on port 12345 would be “tcp:port=12345”.  The client connection string corresponding to this would be “tcp:port=12345,server=ip_or_hostname_of_server”.  (Note that if you are using the DTW 6.6.3.5 package, then it is required that you specify port= before server= in the client connection string or the connection will fail, due to a parser bug in the debugger).  If you use the “.server tcp:port=12345” command, you’ll see something like this:

0:001> .server tcp:port=12345
Server started.  Client can connect with any of these command lines
0: -remote tcp:Port=12345,Server=COMPUTERNAME

Then, you can connect to it using WinDbg by specifying “tcp:port=12345,server=localhost” (assuming you are running on the same computer) in either the Ctrl-R / Connect to Remote Session dialog or via the -remote parameter:

Microsoft (R) Windows Debugger  Version 6.6.0007.5
Copyright (c) Microsoft Corporation. All rights reserved.

Server started.  Client can connect with any of these command lines
0: -remote tcp:Port=12345,Server=COMPUTERNAME
COMPUTERNAME\User (tcp 127.0.0.1:2025) connected at Mon Jul 24 01:33:20 2006

On the server, you’ll see a notice that someone has connected:

COMPUTERNAME\User (tcp 127.0.0.1:2025) connected at Mon Jul 24 01:33:20 2006

Henceforth, all output will be sent to all clients as well as to the local server’s output.  Additionally, clients will be able to use the debuggee status UI windows (e.g. disassembly) as well as view command output text.  After all of this is set up, you can continue to debug as if you were on the server computer (keep in mind, however, that symbol paths are relative to the server and not the client computer).  If you connect multiple clients to the same session, each client will be updated as any debugger client changes the state of the debugging session.

That’s a quick overview of -server and -remote.  However, there’s a bit more to this particular remote debugger mechanism than what I have talked about so far.  I’ll elaborate on some other features and usage cases of -server and -remote in the next post in this series.