ActiveDirectory


I had a reader contact me with a question on using Kerberos and LDAP for authentication into Active Directory, based on Active Directory integration work I did many years ago. I was unable to help him, but he did find the solution to the problem, and I wanted to share it here in case it might help others.

The issue was that he was experiencing a problem using native Kerberos authentication against Active Directory with SSH. Specifically, when he tried to open an SSH session to another system from a user account that had a Kerberos Ticket Granting Ticket (TGT), the remote system dropped the connection with a “connection closed” error message. (The expected behavior would have been for the remote system to authenticate the user automatically using the TGT.) However, when he stopped the SSH daemon and then ran it manually as root, the Kerberos authentication worked.

It’s been a number of years since I dealt with this sort of integration, so I wasn’t really sure where to start, to be honest, and I relayed this to the reader.

Fortunately, the reader contacted me a few days later with the solution. As it turns out, the problem was with SELinux. Because the keytab file had been copied over from a Windows KDC (an Active Directory domain controller), it didn’t have the right SELinux security context and was considered “foreign.” The fix, as my reader discovered, is to use the restorecon command to reset the security context on the Kerberos files, like this (the last command may not be necessary):

restorecon /etc/krb5.conf
restorecon /etc/krb5.keytab
restorecon /root/.k5login
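
If you hit the same symptom, a quick way to confirm that SELinux labeling is the culprit is to check the keytab’s security context before and after running restorecon. This is just a sketch; the type names shown in the comments are what I’d expect to see on a RHEL-style system, so treat them as assumptions:

ls -Z /etc/krb5.keytab
# a copied-in keytab often shows a generic label such as unconfined_u:object_r:user_home_t:s0
# after restorecon, you should see something like system_u:object_r:krb5_keytab_t:s0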

Once the security context had been reset, the Kerberos authentication via SSH worked as expected. Thanks Tomas!


This session describes NetApp’s MultiStore functionality. MultiStore is the name given to NetApp’s functionality for secure logical partitioning of network and storage resources. The presenters for the session are Roger Weeks, TME with NetApp, and Scott Gelb with Insight Investments.

When using MultiStore, the basic building block is the vFiler. A vFiler is a logical construct within Data ONTAP that contains a lightweight instance of the Data ONTAP multi-protocol server. vFilers provide the ability to securely partition both storage resources and network resources. Storage resources are partitioned at either the FlexVol or Qtree level; it’s recommended to use FlexVols instead of Qtrees. (The presenters did not provide any further information beyond that recommendation. Do any readers have more information?) On the network side, the resources that can be logically partitioned are IP addresses, VLANs, VIFs, and IPspaces (logical routing tables).
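
To make that a bit more concrete, creating a vFiler in Data ONTAP 7-mode looks roughly like the following; the vFiler name, IP address, and volume paths are placeholders of my own choosing, the first path listed becomes the vFiler’s root volume, and you should check the syntax against your ONTAP release before using it:

vfiler create vfiler_dmz -i 192.0.2.50 /vol/vfiler_dmz_root /vol/vfiler_dmz_data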

Some reasons to use vFilers would include storage consolidation, seamless data migration, simple disaster recovery, or better workload management. MultiStore integrates with SnapMirror to provide some of the functionality needed for some of these use cases.

MultiStore uses vFiler0 to denote the physical hardware, and vFiler0 “owns” all the physical storage resources. You can create up to 64 vFiler instances, and active/active clustered configurations can support up to 130 vFiler instances (128 vFilers plus 2 vFiler0 instances) during a takeover scenario.

Each vFiler stores its configuration in a separate FlexVol (its own root vol, if you will). All the major protocols are supported within a vFiler context: NFS, CIFS, iSCSI, HTTP, and NDMP. Fibre Channel is not supported; you can only use Fibre Channel with vFiler0. This is due to the lack of NPIV support within Data ONTAP 7. (It’s theoretically possible, then, that if/when NetApp adds NPIV support to Data ONTAP, Fibre Channel would be supported within vFiler instances.)

Although it is possible to move resources between vFiler0 and a separate vFiler instance, doing so may impact client connections.

Managing vFilers appears to be the current weak spot; you can manage vFiler instances using the Data ONTAP CLI, but vFiler instances don’t have an interactive shell. Therefore, you have to direct commands to vFiler instances via SSH or RSH or using the vFiler context in vFiler0. You access the vFiler context by prepending the “vfiler” keyword to the commands at the CLI in vFiler0. Operations Manager 3.7 and Provisioning Manager can manage vFiler instances; FilerView can start, stop, or delete individual vFiler instances but cannot direct commands to an individual vFiler. If you need to manage CIFS on a vFiler instance, you can use the Computer Management MMC console to connect remotely to that vFiler instance to manage shares and share permissions, just as you can with vFiler0 (assuming CIFS is running within the vFiler, of course).
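
As a rough illustration (the vfiler_dmz name is a placeholder, and the syntax is from memory, so verify it against your ONTAP version): “vfiler status -a” lists the vFiler instances and the resources assigned to them, “vfiler run” directs a single command at a particular instance, and “vfiler context” switches the CLI into that instance’s context.

vfiler status -a
vfiler run vfiler_dmz ifconfig -a
vfiler context vfiler_dmz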

IPspaces are logical routing constructs that allow each vFiler to have its own routing table. For example, you may have a DMZ vFiler and an internal vFiler, each with its own separate routing table. Up to 101 IPspaces are supported per controller. You can’t delete the default IPspace, as it’s the routing table for vFiler0. It is recommended to use VLANs and/or VIFs with IPspaces as a best practice.
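
A minimal sketch of how that fits together (the names here are placeholders, so verify against your release): create the IPspace, assign an interface or VLAN interface to it, and then create the vFiler in that IPspace using the -s option to “vfiler create”.

ipspace create ipspace_dmz
ipspace assign ipspace_dmz e0b-100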

One of the real advantages of using MultiStore and vFilers is the data migration and disaster recovery functionality that it enables when used in conjunction with SnapMirror. There are two sides to this:

  • “vfiler migrate” allows you to move an entire vFiler instance, including all data and configuration, from one physical storage system to another physical storage system. You can keep the same IP address or change the IP address. All other network identification remains the same: NetBIOS name, host name, etc., so the vFiler should look exactly the same across the network after the migration as it did before the migration.
  • “vfiler dr” is similar to “vfiler migrate” but uses SnapMirror to keep the source and target vFiler instances in sync with each other.

It makes sense that you can’t use “vfiler dr” or “vfiler migrate” on vFiler0 (the physical storage system). My own thought regarding “vfiler dr”: what would this look like in a VMware environment using NFS? There could be some interesting possibilities there.
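For reference, both operations are driven from the destination storage system, roughly along these lines; vfiler_dmz and prod_filer are placeholder names, and you should confirm the exact syntax in the MultiStore documentation for your release. “vfiler dr configure” sets up the SnapMirror-backed standby copy, “vfiler dr activate” brings that copy online at the DR site, and “vfiler migrate” performs a one-time move.

vfiler dr configure vfiler_dmz@prod_filer
vfiler dr activate vfiler_dmz@prod_filer
vfiler migrate vfiler_dmz@prod_filer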

With regard to security, a Matasano security audit was performed and the results showed that there were no vulnerabilities that would allow “data leakage” between vFiler instances. This means that it’s OK to run a DMZ vFiler and an internal vFiler on the same physical system; the separation is strong enough.

Other points of interest:

  • Each vFiler adds about 400K of system memory, so keep that in mind when creating additional vFiler instances.
  • A MultiStore-enabled system can’t handle any more load than the same system without MultiStore; the ability to create logical vFilers doesn’t mean the physical storage system can suddenly handle more IOPS or more capacity.
  • You can use FlexShare on a MultiStore-enabled system to adjust priorities for the FlexVols assigned to various vFiler instances.
  • As of Data ONTAP 7.2, SnapMirror relationships created in a vFiler context are preserved during a “vfiler migrate” or “vfiler dr” operation.
  • More enhancements are planned for Data ONTAP 7.3, including deduplication support, SnapDrive 5.0 or higher support for iSCSI with vFiler instances, SnapVault additions, and SnapLock support.

Some of the potential use cases for MultiStore include file services consolidation (preserving each file server’s identity on its own vFiler instance), data migration, and disaster recovery. You might also use MultiStore if you needed support for multiple Active Directory domains with CIFS.

UPDATE: Apparently, my recollection of the presenters’ information was incorrect, and FTP is not a protocol supported with vFilers. I’ve updated the article accordingly.


Today HyTrust launched its flagship product, the HyTrust Appliance, a security solution designed to centralize control, management, and visibility for virtualized environments, in particular VMware Infrastructure environments. The HyTrust Appliance achieves this through a number of key features:

  • The HyTrust Appliance provides integration with Active Directory or other LDAP-based directory services to enable centralized authentication. This allows organizations to leverage existing directory services for authentication, whether access is via the VI Client or via SSH to the Service Console.
  • The HyTrust Appliance enables role-based access controls. These role-based access controls are defined in the appliance and permit organizations to control commands run in the Service Console as well as operations performed via the VI Client and vCenter Server.
  • The HyTrust Appliance provides secure logging and auditing functionality for all actions. Again, this logging occurs for every command and every action taken via any access method.

Since all traffic runs through the HyTrust Appliance, the solution has complete visibility and thus complete control over the traffic moving to or from the VMware ESX hosts. A number of different configurations are available for inserting the HyTrust Appliance into the flow of traffic, including using a different VLAN for ESX management traffic as well as a proxied configuration. The HyTrust Appliance can also ensure that the hosts it is protecting are configured to only accept traffic from the HyTrust Appliance itself, thus further ensuring that all access and actions are seen, controlled, and recorded.

The HyTrust Appliance will be available as both a hardware appliance and a virtual appliance. HyTrust also plans to make available a Community Edition at no charge; the Community Edition will support up to 3 VMware ESX hosts.

For more information, visit the HyTrust web site.


A reader contacted me a short while ago to inquire about a problem he was having with his Linux-AD integration efforts. It seems he had recently added a new domain controller (DC) that was intended to be a DC for a disaster recovery (DR) site. When he took this new DR DC offline in order to physically move it to the DR site, some of his AD-integrated Linux systems started failing to authenticate. More specifically, Kerberos continued to work, but LDAP lookups failed. When the reader would bring the DR DC back online, those systems started working again.

There was a clear correlation between the DR DC and the AD-integrated Linux systems, even though the /etc/ldap.conf file specifically pointed to another DC by IP address. There was no reference whatsoever, by IP address or host name, to the DR DC. Yet, every time the DR DC was taken offline, the behavior returned on a subset of Linux hosts. The only difference we could find between the affected and unaffected hosts was that the affected hosts were not on the same VLAN as the production domain controllers.
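
For context, the relevant portion of /etc/ldap.conf (nss_ldap/pam_ldap) in this kind of setup typically looks something like the following; the address and base DN here are placeholders, not the reader’s actual values:

host 192.0.2.10
base dc=example,dc=com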

I theorized that Windows’ netmask ordering feature, which reorders DNS query responses so that clients receive addresses that are “closer” to them, was playing a role here. However, the /etc/ldap.conf file was using IP addresses, not a host name or even the fully qualified domain name of a DC. It couldn’t be DNS, at least not as far as I could tell.

Upon further investigation, the reader discovered that the affected Linux servers—those that were on a different VLAN than both the production DCs and the DR DC—were maintaining persistent connections to the DR DC. (He found this via netstat.) When the DR DC went offline, the affected Linux hosts kept trying to communicate with that DC, and that DC only. Once the reader was able to get the affected Linux hosts to drop that persistent connection, he was able to take the DR DC offline and the Linux hosts worked as expected.
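
If you want to check for the same behavior on your own hosts, something like this will show established LDAP connections and which DC they’re going to (use port 636 instead of 389 if you’re using LDAP over SSL):

netstat -tn | grep -E ':(389|636)'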

So, the real question now becomes: how (or why) did the Linux servers connect to the DR DC instead of the production DC for which they were configured? I think that Active Directory issued an LDAP referral to direct the affected Linux servers to the DR DC as a result of site topology. Perhaps due to an incorrect or incomplete site topology configuration, Active Directory believed the DR DC should handle the VLANs where the affected Linux servers resided. If that is indeed the case, the fix would be to make sure that your AD site topology is correct and that subnets are appropriately associated with sites. Of course, this is just a theory.

Has anyone else seen an issue similar to this? What fix were you able to implement in order to correct it?


I was visiting Unclutterer and saw them sharing older content from the site in a similar fashion. So, I thought I might try it here. Enjoy some of these “blasts from the past”!

One Year Ago on blog.scottlowe.org

LACP with Cisco Switches and NetApp VIFs
Hyper-V Architectural Issue
Latest VDI Article Published

Two Years Ago on blog.scottlowe.org

Bookmark Spam?
Personal Computing as a Collection of VMs?
Application Agnosticism

Three Years Ago on blog.scottlowe.org

Mac OS X and .local Domains
WMF Flaw Exploit Grows Worse
Complete Linux-AD Authentication Details


Virtualization Short Take #24

There’s lots of good information flowing around the Internet, and it’s becoming increasingly difficult to sort through all the useless stuff to find the valuable gems. Hopefully, some of the links that I have collected here will prove to be more useful than useless!

  • I came across this VMware KB article titled “Dedicating specific NICs to portgroups while maintaining NIC teaming and failover for the vSwitch”. I was hoping it would shed new light on some NIC teaming functionality. Unfortunately, it was only about overriding the default vSwitch failover policy on a per-portgroup basis. I was already well aware of that functionality and use it quite extensively in my VMware designs, but for others that may prove useful.
  • This video about VMware DPM sparked some debate about spin-up/spin-down affecting drive MTBF and decreasing a VMware ESX server’s operational lifespan. Chad Sakac of EMC shared some findings from EMC regarding spin-up/spin-down in this post and came to the conclusion that using VMware DPM should not materially affect the reliability or lifetime of servers (at least with regard to drive failures). Personally, I tend to agree that this was FUD, most likely from a competitor, but it’s best to get this sort of thing out in the open and debunked.
  • Leo posted a brief snippet of code to upgrade the VMware Tools on VMs without a reboot. It looks like it might come in handy. And Leo’s guide to configuring jumbo frames with an EMC AX4-5i is quite useful, too—it’s a nice counterpoint to my own guide to configuring jumbo frames.
  • Tomas ten Dam has completed his guide to building a complete “SRM in a Box” setup using the NetApp Data ONTAP Simulator. Of course, Chad wants him to use the Celerra VM…
  • Oh, and while we’re talking VMware SRM, be sure to check out Mike Laverick’s book on VMware SRM, “Administering VMware Site Recovery Manager 1.0”. I haven’t read the book yet, but knowing Mike I’m sure it’s good quality stuff. Maybe Santa will give me a copy for Christmas.
  • Sven H. over at VirtualFuture.info posted a good guide on using thin provisioned VMDKs with VMware ESX 3.5 via the vmkfstools command; there’s a quick sketch of the invocation at the end of this list. (I was going to include a trackback to Sven’s post, but his blog theme doesn’t show the trackback URL.) Seems like I saw somewhere that thin provisioned VMDKs in ESX 3.5 are still unsupported, so deploy accordingly.
  • Via Tony Soper, I found that version 2 of Microsoft’s Offline Virtual Machine Servicing Tool is available. I first discussed the Offline Virtual Machine Servicing Tool back in June during Tech-Ed 2008. You can download the tool here.
  • Also from Tony, here’s a great article on how to balance VM I/O with Hyper-V. An interesting tidbit from this: by default, I/O balancing is enabled for storage, but not for networking. I can see it needing to be enabled for storage, but why disabled by default for networking?
  • More information on controlling resource utilization within Hyper-V is provided in this article by Robert Larson. It’s worth having a quick look if you are unsure how to configure it or how it works.
  • Ben Armstrong answers the question, “Why does it take so long to create a fixed size virtual hard disk?” The answer: the disk space is zeroed out in advance. My question is this: is this need to zero out the disk space a result of how NTFS deletes files or is this scenario applicable to VMFS as well?
  • This has probably been mentioned before, but users considering virtualizing their Active Directory domain controllers should keep these considerations in mind.
  • I recently ran into a situation where we needed to change the IP address of an NFS datastore. (It’s a long story as to how this came about.) In any case, I told the customer that I couldn’t be sure that changing the IP address wouldn’t cause problems. Fortunately, before the customer tried it, I found this post by Rick Scherer. The short story: it doesn’t work, and you shouldn’t do it. Create a new datastore with the correct IP address and use Storage VMotion instead.
  • For even more information on Storage VMotion, also check out Chad’s post here.
  • VMwarewolf continues his Resolution Path series with common fault issues in VMware Infrastructure. Good stuff.
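
As promised in the thin provisioning item above, the vmkfstools invocation is roughly as follows; this is a sketch from memory, and the size and datastore path are placeholders:

vmkfstools -c 20g -d thin /vmfs/volumes/datastore1/newvm/newvm.vmdk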

It’s clearly been too long since I published one of these, as I still have other links collecting dust in my “link bin”:

Third Brigade offers free security for up to 100 virtual machines
Version 4 of the PowerVDI tool
Go Daddy Wildcard Certificate with VI3
New VMware VI network port diagram request for comments
Auditing ESX root logins with email…

Like I said, there’s just so much information! And now that I’m trying to delve deeper into the storage realm, the amount of information I’m trying to manage has only doubled. Hopefully I’ve picked out a few gems for you this week. Thanks for reading!


I’m sorry, folks, but I’m not going to have the time or the resources to publish an update to my existing instructions for integrating Solaris 10 into Active Directory. Quite some time ago I had posted that I planned on creating an update to the original instructions so as to incorporate some lessons learned, but it keeps getting pushed aside for other tasks that are more important and more relevant to my day-to-day work. Rather than keep readers hanging on for something that will likely never appear, I’d rather just be upfront and frank about the situation. As much as I’d love to spend some time working on the Solaris-AD integration situation and documenting my findings, I just don’t have the time. Sorry.


I ran across this handy white paper about OpenSSH on Linux using Kerberos authentication with Windows and Active Directory. There’s not a whole lot in there that isn’t also covered in my Active Directory integration notes, but it is useful information nevertheless.


I just wanted to provide a quick update on some articles I have in the works to be (hopefully) published soon.

  • I’m working on an article discussing when to use various NIC teaming configurations with VMware ESX. There are some significant repercussions here for a variety of network configurations, but especially so for configurations involving IP-based storage (iSCSI or NFS).
  • I’m finally wrapping up an article on the Xsigo I/O Director. I’ve been working with a Xsigo VP780 in the lab for quite some time, and this article will provide a brief overview along with some tips and tricks.
  • I received word from HP that I should be getting a ProCurve switch in my lab soon, so that means I can provide a ProCurve-oriented version of this NIC teaming and VLAN trunking article.
  • I have some notes on using NetApp Open Systems SnapVault (OSSV) in conjunction with VMware ESX that I plan to post here as well.

New versions of the Linux and Solaris AD integration articles are on the way as well, starting with an update of the Solaris instructions to accommodate Solaris 10 Update 5 and Windows Server 2008.

If there’s anything else you’re interested in seeing, let me know in the comments. Thanks for reading!

UPDATE: The NIC utilization article is available here.


In UNIX/Linux integration scenarios, it’s useful to know which accounts have been UNIX-enabled, i.e., have had the UID number, NIS domain, login shell, and home directory attributes configured.

It’s certainly very possible to do this with command-line tools such as AdFind or DsQuery, but users may also find it useful to have a saved query available within the Active Directory Users & Computers console for easy reference.

The way to do this is to define a custom query using this string:

(objectCategory=Person)(objectClass=User)(uidNumber=*)

If you add just this text and nothing else in the “Find Custom Search” dialog box (the Advanced tab), then the console will automatically add ampersands and additional parentheses to turn it into a “proper” LDAP query that will show you any account that has a UID number configured. Certainly, additional fields like loginShell or unixHomeDirectory could be added as well, but this query will probably be sufficient for most instances.
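
For reference, the fully expanded filter that the console ends up evaluating looks like this, and you could feed the same filter to a command-line tool; the AdFind invocation below is an illustrative equivalent written from memory, so double-check the switches before relying on it:

(&(objectCategory=Person)(objectClass=User)(uidNumber=*))

adfind -default -f "(&(objectCategory=Person)(objectClass=User)(uidNumber=*))" sAMAccountName uidNumber loginShell unixHomeDirectory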

I almost didn’t publish this, but figured that if I couldn’t remember the exact syntax, someone else might not be able to remember it either. This one is as much for me as it is for others.

