To change the computer name of an Active Directory domain controller, you generally should not rename it directly with the "Rename this PC" button in the Windows Settings app, nor by clicking the "Change" button in the "System Properties" panel. Doing so can cause various problems, such as an inconsistency between the computer's properties and the information stored in the AD database, which produces the following error the next time you log on to the server:
The SAM database on the windows server does not have a computer account for this workstation trust relationship.
According to Mozilla's MDN documentation[1] on the BFC (Block Formatting Context):
A block formatting context is a part of a visual CSS rendering of a web page. It’s the region in which the layout of block boxes occurs and in which floats interact with other elements.
…
Formatting contexts affect layout, but typically, we create a new block formatting context for the positioning and clearing floats rather than changing the layout, because an element that establishes a new block formatting context will:
Both examples are ill-formed in C++. If a compiler does not diagnose the latter, then it does not conform to the standard.
…
You use a language extension that allows runtime-length automatic arrays, but it does not allow runtime-length static arrays. Global arrays have static storage.
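The original pair of examples is elided above; as a rough illustration of the distinction being described, the sketch below assumes GCC or Clang with the GNU variable-length-array extension: a runtime-length array with automatic storage compiles (typically with a -Wvla warning), while the same declaration at global scope is rejected because a global array has static storage and needs a constant bound.

```cpp
#include <cstddef>

std::size_t n = 10;      // runtime value, not a constant expression

// int global_arr[n];    // rejected: a global array has static storage,
//                       // so its bound must be a constant expression,
//                       // even when the VLA extension is enabled

int main() {
    int local_arr[n];    // ill-formed standard C++, but accepted by
                         // GCC/Clang as a VLA because the array has
                         // automatic storage; -Wvla diagnoses it
    local_arr[0] = 42;
    return local_arr[0];
}
```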
UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node sequential/conv2d/Conv2D (defined at C:\path_of_envs\lib\site-packages\tensorflow_core\python\framework\ops.py:1751) ]] [Op:__inference_distributed_function_985]Function call stack:distributed_function
By default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to CUDA_VISIBLE_DEVICES) visible to the process. This is done to more efficiently use the relatively precious GPU memory resources on the devices by reducing memory fragmentation. … In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as is needed by the process. TensorFlow provides two methods to control this.
The first option is to turn on memory growth by calling tf.config.experimental.set_memory_growth, which attempts to allocate only as much GPU memory as needed for the runtime allocations …
```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
try:
    # Memory growth must be set before the GPUs have been initialized.
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
except RuntimeError as ex:
    # set_memory_growth raises RuntimeError if called after initialization.
    print(ex)
```
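The quoted guide also mentions a second method: instead of growing memory on demand, you can cap the amount of GPU memory TensorFlow may allocate by configuring a virtual device. A minimal sketch, assuming the same tf.config.experimental API used above; the 1024 MB limit is an arbitrary example value:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Restrict TensorFlow to allocating at most 1024 MB on the first GPU.
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as ex:
        # Virtual devices must be configured before GPUs are initialized.
        print(ex)
```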