Meraki Dashboard Active Directory Integration not working with Server 2012 R2 Domain Controller: Unknown Error

We recently ran into an issue where we could not get a Meraki Security Appliance (MX) to integrate with Microsoft's Active Directory. The Meraki dashboard was not particularly helpful in identifying why the connection was failing. The Event log just kept repeating the following error:

Unable to connect to Domain Controller. user: <username>, domain: <short name>, server: <server’s IP>

We did identify that the Username, Password, and IP were correct and that the MX could ping the Domain Controller.

Our next step was to perform a packet capture of the traffic between the MX and the Domain Controller. In the output of the .pcap file, you can see a Client Hello packet in which the MX offers 65 supported cipher suites. The server responds with an immediate RESET, which normally indicates that none of the offered suites are supported.

[Packet capture screenshot: non-working state, Client Hello answered by a RESET]

Our Domain Controllers were Server 2012 R2 systems. During our search for a solution to the issue, we came across this KB: KB2919355

It's important to note that this KB is actually a collection of 6 files that all need to be run, and in a specific order. While running the file for 2919355 itself, we ran into a separate issue where that file would not install. That problem was solved by running the following two hotfixes first: Hotfix 2939087 and Hotfix 2975061. We were then able to install 2919355, as well as the remaining updates in the first KB article. Post reboot, the Meraki Dashboard reported that it was now able to communicate with the Domain Controllers.
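If you want to confirm afterwards that the updates actually landed on the Domain Controller, a quick check from a command prompt on the DC should do it (KB numbers as listed above):

wmic qfe get hotfixid | findstr "2939087 2975061 2919355"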

We ran another packet capture, and the output of the .pcap is displayed below. You can see that the Client Hello is now met with a Server Hello response instead of a RESET. If we dig down and view the cipher suite chosen in the response, we see that AES 256 SHA384 is being used, which apparently was not supported on Server 2012 R2 before the above KB was installed.

[Packet capture screenshot: working state, Client Hello answered by a Server Hello]


Automatic AWS Snapshots with Replication to another Region

I've recently started placing Ubuntu web servers up on AWS. These are pretty small, stand-alone, all-in-one systems that don't use Amazon's database or Elastic Load Balancer features.
I wanted a way to protect these systems in case Amazon ever had an event where a region was down or unstable, which occasionally does happen. If this were a larger deployment, we'd have some sort of real-time database replication between availability zones and an Elastic Load Balancer that would let us fail over seamlessly. In my case, I just want the comfort of knowing there is a copy of the volume in another region, and I want it to happen automatically.

I came across a few posts which had parts of what I was looking for, but not everything. I started with this awesome script by CaseyLabs which can be found here: https://github.com/CaseyLabs/aws-ec2-ebs-automatic-snapshot-bash 

I modified it slightly to provide more verbose logging, and I added a section to both the “snapshot_volumes” and “cleanup_snapshots” functions. I also modified the IAM security policy to allow for copying snapshots. We'll get into all of this in a bit, but before we start, FAIR WARNING: I'm not a developer, and you use this script at your own peril. It creates snapshots and copies data (both of which have costs associated with them) and deletes snapshots. There are lots of things that could go wrong if you do not take the time to understand what you are doing with this script.

First things first, let's create the IAM security policy.

Creating IAM Security Policy

  1. From the main AWS menu select “Identity & Access Management”.
  2. Click “Policies” in the left hand pane
  3. Click “Get Started”
  4. Click “Create Policy”
  5. Click “Select” next to “Create Your Own Policy”
  6. Enter the following:
    1. Policy Name: manage-snapshots
    2. Description: Allow Servers to create and manage snapshots of themselves
    3. Policy Document: (an illustrative example follows this list)
    4. Click “Create Policy”
  7. Click “Groups” in the left hand pane
  8. Click “Create New Group”
  9. Name the group “Snapshot_Managers”, click “Next Step”
  10. Select the group policy “manage-snapshots” and click “Next Step”
  11. Click “Create Group”
  12. Click “Users” in the left hand pane
  13. Click “Create New Users”
  14. In the “Enter User Names” box enter “snapshot-manager”
  15. Click “Create”
  16. Click “Show User Security Credentials”, note both the Access Key ID and Secret Access Key.
  17. Click “Close”
  18. Select the newly created user
  19. Click “Add User to Groups”
  20. Select “Snapshot_Managers” and then click “Add to Groups”
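The policy document for step 6 isn't reproduced above. As an illustrative sketch only (not necessarily the exact JSON the original post used), a minimal policy needs the EC2 snapshot actions the script relies on: describe, create, copy, delete, and tagging. Something along these lines:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateSnapshot",
        "ec2:CopySnapshot",
        "ec2:DeleteSnapshot",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}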

Install and Configure the Script

Install AWS CLI

  1. Login as your admin user
  2. Enter the following commands:
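The command listing isn't reproduced in this copy; on Ubuntu 14.04 the CLI is available from the standard repositories, so something like the following should cover it:

sudo apt-get update
sudo apt-get install -y awscli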

Configure AWS CLI

  1. Enter the following commands:
  2. When prompted enter the Access Key ID for the snapshot-manager account created earlier. Press Enter
  3. When prompted enter the Secret Access Key for the snapshot-manager account created earlier. Press Enter
  4. When prompted to enter the Default Region Name, enter: us-west-2 (this is the region my servers are in; yours will vary. Note that this needs to be a region such as us-west-2, not an availability zone such as us-west-2a)
  5. When prompted to enter Default Output Format, enter: text
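The command behind those prompts is presumably aws configure, run as the same user that will run the snapshot script:

aws configure
# AWS Access Key ID [None]: <snapshot-manager's access key ID>
# AWS Secret Access Key [None]: <snapshot-manager's secret access key>
# Default region name [None]: us-west-2
# Default output format [None]: text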

Download and configure script.

The script can be downloaded and viewed from here. (rename to .sh)

Notes about the script:

  • The user's home directory holds the AWS CLI configuration files; that directory needs to be set within the script
  • It's hard-coded to wait 10 minutes between starting a snapshot and attempting to copy that snapshot to the other region. If your snapshots are huge, this may need to be adjusted.
  • It's configured to delete any snapshot older than the retention period, which is currently 7 days; if you want a longer retention period, this should be adjusted
  • The region we're replicating the snapshots to is hard-coded as us-east-1; this will need adjustment if you want snapshots copied elsewhere. The script also uses the description field of the remote snapshot to hold the ID of the original snapshot. This is important: when the original is deleted, that original snapshot ID is used to query the remote region for snapshots whose descriptions match, and those are deleted as well.
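For a rough idea of what the added sections do (a sketch under the assumptions above, not the exact code; $snapshot_id is a placeholder variable), the copy and the cross-region cleanup come down to a couple of AWS CLI calls:

# copy a new snapshot to the remote region, recording the source snapshot ID
# in the description so it can be matched up later
aws ec2 copy-snapshot --region us-east-1 --source-region us-west-2 \
  --source-snapshot-id "$snapshot_id" --description "$snapshot_id"

# when the original ages out, find and delete the remote copies whose
# description matches the original snapshot ID
for id in $(aws ec2 describe-snapshots --region us-east-1 \
    --filters Name=description,Values="$snapshot_id" \
    --query 'Snapshots[].SnapshotId' --output text); do
  aws ec2 delete-snapshot --region us-east-1 --snapshot-id "$id"
done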

Instructions

  1. Enter the following commands
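The commands themselves aren't reproduced in this copy. Assuming the downloaded file is renamed and kept at /home/ubuntu/aws-snapshot/snapshot.sh (an arbitrary location; adjust to taste), the gist is:

mkdir -p /home/ubuntu/aws-snapshot
cp <downloaded-file> /home/ubuntu/aws-snapshot/snapshot.sh   # the file from the link above, renamed to .sh
chmod +x /home/ubuntu/aws-snapshot/snapshot.sh
/home/ubuntu/aws-snapshot/snapshot.sh                        # test run once before handing it to cron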

Configure Cron Job

  1. Enter the following commands
  2. When prompted, select “2. Nano” as the editor. Add the following line to the end of the file (shown in the sketch after this list):
  3. This line will run the script at minute 0 of hour 23, on every day of the month, in every month of the year, but only if that day is Sunday (0); explanation below
  4. Press “Control + O” to write the file and “Control + X” to exit crontab
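For reference, the crontab is opened with crontab -e, and the line described above looks like this (the script path here is whatever you used when installing the script; mine is a placeholder):

crontab -e

# m h dom mon dow command
0 23 * * 0 /home/ubuntu/aws-snapshot/snapshot.sh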

Sources

First and foremost, 90% of this was written by CaseyLabs and can be found here

Cron Job information that was the most helpful was found here

Copy-Snapshot documentation can be found here

Documentation on describing EC2 snapshots can be found here

Helpful crontab troubleshooting tips can be found here

 

Installing or Updating OpenManage on ESXi Hosts

Assuming you are using Dell servers, you might be interested to know that you can install and use the Dell OpenManage Server Administrator application on your ESXi hosts and manage their hardware in nearly the same way you manage your Windows servers' hardware. First, OpenManage needs to be downloaded; you can find it here: OpenManage. Make sure to download both the Windows version (for the system that will be managing the ESXi host) as well as the version that matches your ESXi version.

Next, we need to move the VIB over to the host. The way I do this is with a tool called WinSCP, which can be found here

Enabling SSH on ESXi host

  1. Connect to the host with the vSphere Client
  2. Select the Host, and then click the “Configuration” tab
  3. Click “Security Profile” from the bottom right-hand box
  4. Click “Properties…” in the row titled “Services”
  5. Highlight “SSH” and then click “Options…”
  6. Click “Start”
  7. Click “OK”

Moving file to host with WinSCP

  1. Open WinSCP and enter the following:
    1. File Protocol: SCP
    2. Hostname: <IP Address of Host>
    3. Username: root
    4. Password: <root password>
  2. When prompted, click “Yes” to accept the server's host key
  3. In the right hand pane find the folder called /tmp and double click on it.
  4. In the left hand pane, locate the .vib on your PC and then copy it into the /tmp folder.

Installing OpenManage

  1. Place the host in maintenance mode
  2. SSH into the host using PuTTY
  3. Enter the following command, making adjustments to the file name to match that of your VIB:
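The install command itself isn't shown above; it is the standard esxcli VIB install pointed at the file you copied to /tmp (the file name below is only an example, use the name of your VIB):

esxcli software vib install -v /tmp/OM-SrvAdmin-Dell-Web-8.2.0-<build>.vib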

Now reboot the host.

Updating the OpenManage on the host

  1. Place the host in maintenance mode
  2. SSH into the host using PuTTY
  3. Enter the following command to confirm OpenManage is installed on the host; scroll up looking for VIBs from “Dell” and verify that “OpenManage” is in the list
  4. Next, run the following command to remove the existing version of OpenManage:
  5. Lastly, follow the installation instructions above to install the newest version.
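The two commands referenced in steps 3 and 4 are the usual esxcli list and remove pair; the VIB name you remove should match whatever the list shows (Dell's OMSA typically registers as OpenManage):

esxcli software vib list                   # scroll up and look for a Dell VIB named OpenManage
esxcli software vib remove -n OpenManage   # remove the old version before installing the new one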

Managing the Host

  1. Install the Windows version of the software that was downloaded earlier on your management system
  2. Once it's installed, double click the OpenManage application
  3. Enter the following:
    1. Hostname/IP Address: This is your host’s IP
    2. Username: root
    3. Password: <root password>
    4. Ignore Certificate Warnings: Checked

Now you can manage your host's Dell hardware as if it were any Windows system with OpenManage installed.

NOTE: Annoyingly, both the management system and the hosts must be running the same version of the software; in this example that was OpenManage 8.2

WDS Capture Error using 2012 R2 or 8.1 install.wim \windows\system32\boot\winload.exe Status:0xc000000f

I recently ran into an issue where a capture WIM created from a Windows 8.1 (x64) and Server 2012 R2 install.wim repeatedly failed on boot with the error:

Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:

  1. Insert your windows installation disc and restart your computer.
  2. Choose your language settings, and then click “Next.”
  3. Click “Repair your computer.”

If you do not have this disc, contact your system administrator or computer manufacturer for assistance.

File: \Windows\System32\boot\winload.exe

Status: 0xc000000f

Info: The application or operating system couldn't be loaded because a required file is missing or contains errors.

 

I was able to solve this issue by mounting the WIM with imagex, changing nothing, and then unmounting the WIM using the /commit argument.

Follow these steps (assuming your file is located at c:\capture.wim and your mount directory is c:\mount):
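The step listing isn't reproduced here, but with imagex from the WAIK/ADK the mount-and-commit sequence comes down to three commands:

rem mount image index 1 read/write, change nothing, then commit on unmount
mkdir c:\mount
imagex /mountrw c:\capture.wim 1 c:\mount
imagex /unmount /commit c:\mount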

Once the image was committed, I opened the WDS console, selected the current Capture image, and selected “Replace Image…”. I then pointed to the c:\capture.wim file previously edited.

I then rebooted the client and tried the capture image again; this time it worked without issue. I'm not sure what mounting and unmounting the image did, but I suspect it validates or changes certain files during the mount and unmount that are required for the image to be bootable.

How to have AWS instances update Route53 CNAME each time they boot

Recently I had a need for my AWS instances to dynamically update CNAME records each time they started. You only get a dedicated IP if you purchase an Elastic IP, and even then only 5 per account unless you reach out to Amazon for more. Knowing that I'm both cheap and lazy, I wanted something that would be free as well as automatic. I found quite a few blogs and articles that were a big help, but no one 'put it all together' for me. After about 6 hours I had a fully working solution, but please feel free to comment on where it can be improved.

This article makes the following assumptions: Ubuntu 14.04 LTS is being used as the instance OS, and the external DNS domain is public and hosted on Route 53.

I’m neither an AWS nor Linux daily user, so if you see something that could be improved, please do let me know.

Create AWS User, Group, and Policy for Dynamic DNS

  1. From the main AWS menu select “Route 53”
  2. Click “Hosted Zones” in the left hand column
  3. Click “Create Hosted Zone”
  4. Enter the Domain Name that will be updated by Servers, this can be a subdomain if desired.
  5. Type: Public Hosted Zone
  6. Click “Create”
  7. Once created, note the zone ID for later.
  8. From the main AWS menu select “Identity & Access Management”.
  9. Click “Policies” in the left hand pane
  10. Click “Get Started”
  11. Click “Create Policy”
  12. Click “Select” next to “Create Your Own Policy”
  13. Enter the following:
    1. Policy Name: change-dns-records
    2. Description: Allow Servers to update their own CNAME Records each time they reboot.
    3. Policy Document: (an illustrative example follows this list)
    4. NOTE: Replace <Zone ID> with the zone ID of the DNS zone the server needs to update.
    5. Click “Create Policy”
  14. Click “Groups” in the left hand pane
  15. Click “Create New Group”
  16. Name the group “DNS_Editors”, click “Next Step”
  17. Select the group policy “change-dns-records” and click “Next Step”
  18. Click “Create Group”
  19. Click “Users” in the left hand pane
  20. Click “Create New Users”
  21. In the “Enter User Names” box enter “dns-editor”
  22. Click “Create”
  23. Click “Show User Security Credentials”, note both the Access Key ID and Secret Access Key.
  24. Click “Close”
  25. Select the newly created user
  26. Click “Add User to Groups”
  27. Select “DNS_Editors” and then click “Add to Groups”
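The policy document for step 13 isn't reproduced above either. As an illustrative sketch only (your statement list may differ), the dns-editor user needs to be able to list hosted zones and change records in the one zone; replace <Zone ID> as described in the NOTE:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetHostedZone"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListResourceRecordSets",
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/<Zone ID>"
    }
  ]
}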

Install and configure CLI53

  1. Grab the URL for the most recent version from here: https://github.com/barnybug/cli53/releases/latest. Make sure to download the proper build (I'm using AMD64)
  2. Log in to the Ubuntu instance and run the following commands (download cli53, move it to /usr/local/bin, change permissions, create a symbolic link in /usr/bin, create the route53 config file, and secure it); a consolidated sketch of these follows the list below:
  3. Edit the /etc/route53/config file and enter the following:
  4. NOTE: Replace <dns-editor's access key ID> and <dns-editor's secret access key> with the appropriate values from the dns-editor user. Update YourDomain.com to match either your top-level domain or a subdomain of one of your domains.
  5. Next, create a file called /usr/sbin/update-route53-dns.sh and enter the following into the file:
  6. NOTE: Replace [Client_URL_ShortName] in the above text with whatever you want the CNAME to be. I use the hostname of the server, but you could use anything (www, testing, mail, etc.).
  7. NOTE: It should not be necessary to delete the record and then re-create it; the --replace flag should be able to do that in a single command. However, I could not get it to work in cli53 build 6.5.0, which is what was used here, so I had to delete the existing CNAME and then re-create it. I also noticed that it is case sensitive and records are always created as lower case, so in your delete command make sure you are specifying the record to delete in all lowercase.
  8. NOTE: In some AMI distributions, ec2metadata needs to be replaced with ec2-metadata.
  9. Lastly, we need to add the script to the startup scripts that run during boot; enter the following commands:
  10. Reboot the instance and verify that it has created a CNAME for itself.
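Since the command and file listings aren't reproduced in this copy, here is a consolidated sketch of what steps 2, 3, 5, and 9 boil down to, under the assumptions already stated. The release URL, the 300-second TTL, and the record names are placeholders; the author's actual files may differ in the details.

# Step 2: install cli53 and prepare its config directory
wget -O cli53 https://github.com/barnybug/cli53/releases/download/<version>/cli53-linux-amd64
sudo mv cli53 /usr/local/bin/cli53
sudo chmod 755 /usr/local/bin/cli53
sudo ln -s /usr/local/bin/cli53 /usr/bin/cli53
sudo mkdir -p /etc/route53

# Step 3: contents of /etc/route53/config (keys from the dns-editor user, see NOTE 4)
sudo tee /etc/route53/config > /dev/null <<'EOF'
AWS_ACCESS_KEY_ID=<dns-editor's access key ID>
AWS_SECRET_ACCESS_KEY=<dns-editor's secret access key>
ZONE=YourDomain.com
EOF
sudo chmod 600 /etc/route53/config

# Step 5: /usr/sbin/update-route53-dns.sh
sudo tee /usr/sbin/update-route53-dns.sh > /dev/null <<'EOF'
#!/bin/bash
. /etc/route53/config
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
PUBLIC_DNS=$(ec2metadata --public-hostname)   # some AMIs use ec2-metadata instead (NOTE 8)
# delete the old record first (name in all lowercase, see NOTE 7), then re-create it
cli53 rrdelete "$ZONE" [client_url_shortname] CNAME
cli53 rrcreate "$ZONE" "[Client_URL_ShortName] 300 CNAME $PUBLIC_DNS."
EOF
sudo chmod +x /usr/sbin/update-route53-dns.sh

# Step 9: call the script at boot, e.g. by adding the line
#   /usr/sbin/update-route53-dns.sh
# to /etc/rc.local above the final "exit 0"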

Place Cisco 1702i Access Point into Autonomous mode

If you are like me, you occasionally need to set up a single AP at a site that is either too small for a controller or unwilling to pay the extra costs associated with one. Here are the steps required to change to autonomous mode, as I believe all of the x702i series ship in lightweight mode by default.

  1. Log into www.cisco.com
  2. Click “Support” at the top
  3. Click the “Downloads” tab
  4. Select “Wireless” from the left hand pane
  5. Select “Access Points”
  6. Select “Cisco 1700 Series Access Points”
  7. Select “Cisco Aironet 1702i Access Points”
  8. Click “Autonomous AP IOS Software”
  9. Ideally, you are looking for the highest-numbered firmware revision that's marked as MD or GD. In some cases you'll only see ED revisions; download the highest revision number. Click the “Download” button and agree to the terms of service.
  10. Connect a network cable from your PC to the AP.
  11. Start a TFTP server on your computer, and set your interface to 10.0.0.1 255.255.255.0.
  12. Open a Serial connection to the AP, after it finishes booting log in. [Default Password:Cisco ]
  13. Enter the following commands, pressing enter after each line:
    1. enable
    2. debug capwap console cli
    3. debug capwap client no-reload
    4. capwap ap ip address 10.0.0.2 255.255.255.0
    5. capwap ap ip default-gateway 10.0.0.1
    6. archive download-sw /force /overwrite tftp://10.0.0.1/%File Name%.tar
  14. The AP will reboot automatically. After it finishes rebooting, log back in and issue the following command:
    1. show version
  15. Verify the AP is now running the updated image, and that you have access to the full suite of commands.

NOTE: You'll notice that you keep getting a CAPWAP error while the AP is in lightweight mode. If you are having trouble entering these commands because of it, put them all into a notepad file, wait for the error to appear, and then quickly paste them all in at once.

Configuring Cisco 1702i Autonomous access point for use with Chromecast

Assuming you'd like to connect a client computer to your Chromecast via the same WLAN on your Cisco AP, all you need to do is follow the steps below.

Assumptions:

  1. The Cisco AP is configured with only a single SSID, and that SSID happens to be associated with vlan 2.
  2. You are broadcasting the WLAN on both radios
  3. All the required configuration to support clients (SSID, Auth, etc) is already setup

Instructions:

  1. Log in to your AP using the command line and issue a show run
  2. The output of your show run command should have some lines similar to what's below:

!
interface Dot11Radio0.2
encapsulation dot1Q 2
no ip route-cache
bridge-group 2
bridge-group 2 subscriber-loop-control
bridge-group 2 spanning-disabled
bridge-group 2 block-unknown-source
no bridge-group 2 source-learning
no bridge-group 2 unicast-flooding
!
interface Dot11Radio1.2
encapsulation dot1Q 2
no ip route-cache
bridge-group 2
bridge-group 2 subscriber-loop-control
bridge-group 2 spanning-disabled
bridge-group 2 block-unknown-source
no bridge-group 2 source-learning
no bridge-group 2 unicast-flooding
!

So, since our SSID is associated with VLAN 2, we'll need to issue a command on each of the sub-interfaces using that VLAN: Dot11Radio0.2 and Dot11Radio1.2

Enter the following commands:

  1. # config t
  2. (config)# interface Dot11Radio0.2
  3. (config-if)# no bridge-group 2 port-protected
  4. (config-if)# exit
  5. (config)# interface Dot11Radio1.2
  6. (config-if)# no bridge-group 2 port-protected
  7. (config-if)# exit

Now we'll also need to globally disable IGMP snooping; we'll enter this additional command from global config mode:

  1. # config t
  2. (config)# no ip igmp snooping
  3. (config)# exit

Lastly, just save your config and test. You should now be able to connect to the Chromecast from a wireless client connected to the same WLAN as the Chromecast, or, if you followed the previous post on configuring a SonicWALL for use with a Chromecast (located here), from a wired client connected on another interface of your SonicWALL.

 

Configuring Cisco Wireless LAN Controller for use with Chromecast

This post is mostly meant to complement the post on configuring a SonicWALL with Chromecast (found here), since it's fairly easy, but nevertheless here we go:

Assuming your Chromecast is connected to the same wireless LAN as your client, the only thing you need to do is log in to your WLC and enable multicast. Follow these steps:

  1. Load the WebUI of your Cisco Wireless LAN Controller.
  2. Click Controller tab at the top
  3. Click Multicast in the left hand tab
  4. Check the box titled Enable Global Multicast Mode
  5. Click Apply in the top right of the page
  6. Click Save Configuration in the top right of the page

That's it; you should now be able to see the Chromecast from any wireless client connected to the same WLAN as the Chromecast.

Configuring SonicWALL to work with Chromecast

In this write up I’m making the following assumptions:

  1. Your client (the device trying to connect to the Chromecast) and your Chromecast are on SEPARATE zones. For example: the client is on the LAN and the Chromecast is on, say, a wireless zone.
  2. The wireless access point(s) that the Chromecast is connected to are already configured for multicast (I'll have some other posts on how to configure those later. Update: Here and Here)

First off we need to enable multicast, and here is how we do that:

  1. Login to the SonicWALL and click Firewall Settings from the left hand pane
  2. The menu will expand, when it does click Multicast
  3. Check the box titled Enable Multicast
  4. Uncheck the box titled Require IGMP Membership reports for multicast data forwarding
  5. Select the radio button titled: Enable reception of all multicast addresses
  6. Click Accept at the top
  7. Now click Network from the left hand drop down, and when the menu expands click Zones
  8. For each zone that will be participating with the Chromecast, click the configure icon and check the box titled Allow Interface Trust if it's not already selected. Click OK
  9. From the Network menu on the left, click Interfaces. For each interface that's part of any zone configured in step 8, perform the following: click the configure icon for the interface, click the Advanced tab, and check the box titled Enable Multicast Support. Click OK.
  10. Now click Firewall from the left hand drop down, and when the menu expands click Access Rules
  11. Select the radio button titled Matrix at the top
  12. For each zone that was configured in step 8, select the rule from ZONE to MULTICAST
  13. Ensure that there is an ALLOW rule with ANY listed for Source, Destination, and Service. If there is not an ALLOW ANY ANY ANY rule, create one. Repeat for each zone that was configured in step 8
  14. Update: As James points out below, you also need a traditional bi-directional Allow rule between both zones.

Testing

  1. From your client, open Chrome and download the “Google Cast” extension if it's not already installed.
  2. Verify that when you try to cast a tab, the Chromecast that’s located on the other interface/zone is listed.

Exchange 2003 to 2010 Upgrade, Error with Free/Busy folder replicas

I've been migrating some of our slower customers away from Exchange 2003 recently, and I ran into an issue that took me 3 days to figure out. I was getting the following error on the 2010 server: Couldn't find an Exchange 2010 or later public folder server with a replica for the free/busy folder: EX:/O=FIRST ORGANIZATION/OU=FIRST ADMINISTRATIVE GROUP, despite successfully running AddReplicaToPFRecursive.ps1 on the \NON_IPM_SUBTREE\ top public folder. On the 2003 server, all folders, including the Free/Busy folders, displayed two replicas, one for 2003 and one for 2010.

I eventually moved all replicas off the 2003 server and removed the PF database, hoping that it would force any replicas that remained, and perhaps weren't being displayed properly, over to the 2010 server. I was prompted to do just that, but I still continued to receive the error stated above.

The reason I'm posting this is not that it's a new issue; it seems to be a pretty common problem. Rather, it's the hard time I had finding the correct resolution online. Perhaps my search terms were off, but in case it helps someone else, the way to solve it is as follows:

On the 2010 server, run the following commands:

1) Set-PublicFolder -Replicas '2010 Public Folder Database' -Identity '\NON_IPM_SUBTREE\SCHEDULE+ FREE BUSY\EX:/o=First Organization/ou=First Administrative Group'

and then verify it worked:

2) Get-PublicFolder -Recurse '\Non_IPM_SubTree\SCHEDULE+ FREE BUSY\EX:/o=First Organization/ou=First Administrative Group' | fl

3) Restart the “Microsoft Exchange Mailbox Assistants” service
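If you prefer to do that last restart from the shell as well, restarting by service name should work (the short name of the Mailbox Assistants service is MSExchangeMailboxAssistants):

Restart-Service MSExchangeMailboxAssistants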