The deghosted area uses pixels from only one of the source photos instead of some average of them, so maybe the selected source photo was darker even after normalizing the EV of all the photos?
Without seeing the photos it's hard to even guess what you're talking about, specifically.
The HDR Merge screen shot shows the overlay. In particular, notice the curved area just above the brightest spot in the sky on the left. Then look at the other screen shot of the DNG in Lightroom and find the same area. You will see the silhouette of the deghosting overlay. I tweaked the Lightroom sliders as shown in the screen shot only to make the overlay show up better.
My analysis would be that there is nothing wrong with LR's HDR function, at least as demonstrated by these two photos, and that this is not something that started with the xx.6.1 update.
In my opinion, the lighting changed as the sun was setting: the clouds are actually darker in the second shot. When LR normalized the exposure of the second photo to match the first, using only the difference in exposure time (0.8 seconds vs 0.6 seconds), the second photo remained slightly darker. With deghosting turned on, LR fills the deghosted area using only the normalized pixels of the second photo, which are slightly darker than the average of both photos, so the difference in subject brightness is visible across the deghosting boundary.
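As a rough illustration (this is my own arithmetic, not Adobe's actual normalization code), the exposure-time difference between the two frames only accounts for a fraction of a stop:

```python
import math

# Shutter speeds of the two photos (seconds), from the screenshots discussed above
t1, t2 = 0.8, 0.6

# EV offset implied by exposure time alone: the log2 ratio of the times
ev_offset = math.log2(t1 / t2)
print(f"exposure-time offset: {ev_offset:.2f} EV")  # about 0.42 EV
```

The point is that this normalization only corrects for the camera settings; if the scene itself got darker in the seconds between frames, that residual brightness difference survives the normalization and shows up at the deghosting boundary.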
To see how the sky changes from one photo to the next, reset the settings on both photos to the defaults (mostly 0s or As Shot for anything that would affect brightness), then boost the Exposure setting of the second photo to +0.70. This makes the ground approximately the same brightness in the two photos. Then switch to Library and arrow back and forth between the two photos, and you'll see the V of light from the hole in the clouds narrow very slightly in the second photo, making the clouds very slightly darker.
The way to avoid this would be to take the two photos as a burst, a fraction of a second apart, instead of 8 seconds apart, which is enough time for the sun to set a little more and the overall brightness of the scene to change, especially the light reflected off the clouds.
A more general issue with using these two photos for HDR is that they really aren't much different in EV (exposure value), so the areas that are too bright to show any detail are the same in both photos, and the rest of each photo has enough detail for toning everywhere but the brightest area. Since these two photos are at similar EV, just pick one, tone it, and forget doing HDR.
For an HDR scene you want one photo to have detail in the brightest areas, with other areas perhaps too dark, and the other photo to have enough detail in the darkest areas, with the brightest areas blown out. For an image that includes the sun, that means a difference of several stops, not less than one EV. It's possible to need more than two photos, but many times two is enough.
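To check whether a pair of exposures is spread far enough apart for HDR, you can compare their EVs directly using the standard formula EV = log2(N²/t). This is just a sketch; the f-number below is a made-up example, not from the actual photos:

```python
import math

def ev(f_number: float, shutter_s: float) -> float:
    """Exposure value at base ISO: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Hypothetical pair: same aperture (f/8 assumed), shutter speeds like the photos discussed
brighter = ev(8.0, 0.8)   # longer exposure -> lower EV number
darker = ev(8.0, 0.6)
spread = abs(darker - brighter)
print(f"spread = {spread:.2f} EV")  # about 0.42 EV, far less than the several EV you'd want
```

With a spread this small, both frames clip the same highlights, which is why picking one frame and toning it works just as well as merging.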
I appreciate the replies very much, but I'm not sure the issue was understood. I know how to bracket images for HDR.
Yes, I know these two images were not properly bracketed. They were two single exposures made a couple seconds apart without using in-camera bracketing.
But HDR itself was not the issue. The issue was the visible presence of the red de-ghosting overlay in the merged image. It does not show up as red, but is visible as a darker area of the image. That it is a relic of the red overlay is obvious.
I'm not going to worry about it or pursue it right now, as I have too many other things going on. I believe that if I had unchecked the box to show the de-ghosting overlay before merging, the "overlay ghost" on the image would not be present. I have not confirmed that, but from now on I will be sure to uncheck that box before merging the images.
Thanks again for replying!
Jeffery is basically agreeing with my explanation and adding that using only one image is the best solution to this particular issue, seeing as it's not a bug but an issue with the two images chosen for the HDR process.
HDR expects the subject to be illuminated the same in all the source images, with only the camera exposure parameters differing: constant subject illumination, different EV.
With your two images, the subject illumination is not the same, because the sun set slightly during the 8 seconds that elapsed between them, and the EV difference was very small, less than a stop. So Jeffery's suggestion of using only one image, and not trying to do HDR, is probably the best.
Given a different set of images, taken a fraction of a second apart with some sort of burst mode so the change in illumination wouldn't be noticeable, and with more difference in exposure parameters (several EV), the images wouldn't show a noticeable line in the clouds, and the range of values in the HDR result would actually exceed what a single raw image could hold.