All of the Observers looked through the night logs to see whether they could find any problems or unusual behaviour with the mirrors. We did, in fact, find mention on the nights of 14 October 1999 and 15 October 1999 that there were problems with both the secondary and the primary mirrors.
On the night of 14 October 1999, the log reported:
1049: setup frames for focus, lskips, etc. The focus loop was started, and we were getting images of about 1.2'' to 1.5''. We then did a couple of lskips and found that the rotator angle was +0.096 +- 0.454. The seeing stayed at about 1.3'' by 22:00, but on the all-sky camera we were picking up cirrus. We finished up the scan at 22:00 and decided to reboot the secondary galil. This did not cure the persistent message we were getting from the TCC:
0 0 F Modu="prt_Read"; Text="read timed out on port LTA10:"
0 0 F Modu="mir_o_MoveOneGalil"; Text="could not specify new positions for Sec mirror actuators"
so we logged out the port. This did fix the problem, but then we needed to home the secondary. We had some problems homing the secondary, as it did not find the home position the first few times. On the 3rd try we obtained the following:
"unable to find reverse limit switch or unable to bounce".
After this, we were able to home and restart collimate.
1050: new scan at equator, meridian, but using the swindle this time (this is a test to see whether the swindle data will populate the headers of the astrom files correctly, with RA and Dec as the appropriate keywords in the header). We found the new focus = -1600, and then restarted the focus loop. The seeing was about 1.4''. Looking at the focus loop plot, however, we found that the secondary did not appear to be doing anything. As we were getting ready to do the focus sweep, we obtained the following messages from the TCC:
0 0 I TCCStatus="TTT","NNN"; TCCPos=-0.66,57.21,89.31; AxePos=-0.66,57.21,89.31
0 0 F Modu="prt_ReadReplySet"; Text="too many replies on port LTA19:"
0 0 I Modu="prt_ReadReplySet"; Text="the following replies were read:"
0 0 I Modu="prt_ReadReplySet"; Text="ERR: OUT OF MEMORY"
0 0 F Modu="axe_o_Move"; Text="command failed for IR"
0 0 I Modu="prt_ReadReplySet"; Text="the following replies were read:"
0 0 I Modu="prt_ReadReplySet"; Text="ERR: OUT OF MEMORY"
0 0 F Modu="axe_o_Move"; Text="command failed for TEL1, TEL2, IR"
0 0 I Modu="axe_o_Init"; Text="initializing axes: TEL1, TEL2, IR"
0 0 I Modu="axe_o_Init"; Text="locking ports: TEL1, TEL2, IR"
0 0 I Modu="axe_o_Init"; Text="resynchronizing I/O stream for: TEL1, TEL2, IR"
0 0 I Modu="axe_o_Init"; Text="initializing and setting time of: TEL1, TEL2, IR"
0 0 I Modu="axe_o_Init"; Text="sending init files (if they exist) to: TEL1, TEL2, IR"
0 0 I AxisInit="TTT"
0 0 F Modu="axe_i_SchMove"; Text="tracking failed; axes halted"
0 0 F Modu="exe_Track"; Text="no longer tracking"
0 0 I TCCStatus="TTT","NNN"; TCCPos=-0.66,57.21,89.31; AxePos=-0.66,57.21,89.31
0 0 I TCCStatus="HHH","NNN"; TCCPos=NaN,NaN,NaN; AxePos=-0.66,57.21,89.31
We did an "axis stop" immediately, which caused all the Modu="prt_ReadReplySet"; Text="ERR: OUT OF MEMORY" messages to be repeated.
As we were parked on the equator, we continued to try and get data for the focus sweep, but found that, no matter what focus was requested, there was no motion of the secondary. This prompted a telephone call to Connie, who advised that collimation and focus moves will not work unless the telescope is tracking. We then determined that port LTA19 is the communications port between the TCC and the MCP. We then rebooted the MCP (which required both resetting the valid AZ fiducial, as discussed with French, and refiducialising the telescope). This was finished up at about 01:30.
At the beginning of this night, we were getting error messages from the TCC about the secondary galil, although commanding the focus to change did appear to cause a change in the image quality. Since we were planning on getting focus sweep data, we wanted to make sure that the secondary mirror was responding properly. We decided to go ahead and power-cycle the galils to try to get rid of the error message. After power-cycling the galils, the error message did go away, but it was clear that setting the focus was not changing the image quality. We then attempted to home the secondary. It took 4 tries to get the secondary to respond. Later that same night a focus sweep was successfully completed, and the analysis of those data will be discussed below.
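As an aside on the "swindle" test mentioned in the 1050 log entry above, the check it describes amounts to confirming that each astrom file comes out with RA and Dec written as header keywords. A minimal sketch of such a check is given below; the glob pattern and the exact keyword names (RA, DEC) are our assumptions for illustration, not a description of the actual SDSS astrom file format.

# Hedged sketch: confirm that each astrom file carries RA and Dec header keywords.
# The file pattern and keyword names are illustrative assumptions, not the real format.
import glob
from astropy.io import fits

for path in sorted(glob.glob("astrom-*.fits")):
    header = fits.getheader(path)      # primary HDU header
    ra, dec = header.get("RA"), header.get("DEC")
    if ra is None or dec is None:
        print(path, ": missing RA/DEC keywords")
    else:
        print(path, ": RA =", ra, " DEC =", dec)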
On the night of 15 October 1999, the report was somewhat similar to that of the previous night, but with a slight difference:
...We did some flats and arcs and then slewed to the centre of plate 189 on cartridge 2 at a nominal focus of 1500. Using an easyFiber, we could not find any stars. We tried to change the focus, and we seemed to see the mirrors move (i.e. the values reported by migDebug did change). At 20:50 we power-cycled the galils on both the primary and secondary to see if we could induce anything from the mirrors. We ended up homing both the primary and the secondary, as the reboot seemed to have no effect. We had continued problems with getting the secondary and the TCC to communicate, and Craig finally had to do a stop TCC to get the LTA functional again.
The report was again of both mirrors being unresponsive to commanding, with a complete logging-out of the TCC required before commanding became effective again. After this point, the secondary seemed to be responsive, and data were taken. It is also mentioned that the seeing was on the order of 1.5'' to 2''. If the fractures to the secondary occurred on either of these nights, they did not impact the science data severely or obviously (from visual inspection of the data as it came from the telescope).
Certainly there is prima facie evidence to support the idea that there may have been something very wrong with both of the mirrors on these 2 nights, but from the evidence of the night logs, it appears that most of the problems were those of connectivity - not of motion, once the connectivity was re-established. Indeed, most of the night logs for this dark run mention that the primary was found to oscillate, had to be re-homed nightly, and had to be put in specific locations before the telescope was stowed away every morning. It should be pointed out that there was a power outage at the observatory during the day of 08 October 1999 which caused a great deal of havoc with the imaging camera, the archiver, and other instrumentation around the observatory, and which had long-lasting effects for many days afterwards. It is certainly plausible that the power outage caused the secondary actuators to be moved in a peculiar way that placed a stress on the mirror, and that this stress was not evidenced until the temperature drop on the weekend of 16-17 October 1999 fractured the glass. Plausible, but not very convincing, as we have 3 sets of focus sweep data from different times during the run, all of which are consistent with each other and which show no obvious evidence of peculiar stressing of the secondary - something that might be expected to show up in the extreme out-of-focus images from the focus sweeps.
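To make the last point concrete, the consistency check between focus sweeps can be thought of as fitting each sweep of image width versus commanded focus with a simple parabola and comparing the fitted best-focus positions and curvatures. The sketch below illustrates that idea only; the focus and FWHM values in it are placeholders and the parabolic model is an assumption on our part, not the actual reduction applied to the sweep data discussed later in this report.

# Hedged sketch: compare focus sweeps by fitting FWHM vs. commanded focus with a parabola.
# The focus and FWHM values below are placeholders, not data from the October 1999 run.
import numpy as np

sweeps = {
    "sweep_1": ([-2400, -2000, -1600, -1200, -800], [2.4, 1.8, 1.4, 1.9, 2.5]),
    "sweep_2": ([-2400, -2000, -1600, -1200, -800], [2.5, 1.9, 1.4, 1.8, 2.4]),
}

for name, (focus, fwhm) in sweeps.items():
    # least-squares parabola: fwhm = a*focus**2 + b*focus + c
    a, b, c = np.polyfit(np.asarray(focus, float), np.asarray(fwhm, float), 2)
    best_focus = -b / (2.0 * a)                  # vertex of the fitted parabola
    best_fwhm = np.polyval([a, b, c], best_focus)
    print("%s: best focus ~ %.0f, FWHM ~ %.2f arcsec, curvature a = %.2e"
          % (name, best_focus, best_fwhm, a))

Sweeps that agree in best focus and curvature, as the 3 sets taken during this run do, argue against a gross change in the figure of the secondary; strong stressing would be expected to show up instead as distorted shapes in the far out-of-focus images.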