All-sky search algorithms for monochromatic signals in resonant bar GW detector data

In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant bar Gravitational Wave (GW) antenna, with the goal of assessing the presence (or absence) in them of long duration monochromatic GW signals, as well as estimating their amplitude and frequency, within the sensitivity band of the detector. Such signals are most likely generated by the fast rotation of slightly asymmetric spinning stars. We develop the practical procedures, together with a study of their statistical properties, which provides useful information on each technique's performance. The selection of candidate events will then be established according to threshold-crossing probabilities, based on the Neyman-Pearson criterion. In particular, it will be shown that our approach, based on phase estimation, achieves a better signal-to-noise ratio than the more common approach of pure spectral analysis.


INTRODUCTION
It is generally believed that the most intense Gravitational Waves (GW) arriving at the Earth from remote sources in the Universe correspond to very short duration (∼1 millisecond) bursts, generated in the explosion of a supernova (Thorne 1987), or in gamma-ray bursters (Roland et al. 1994). From their very beginnings, cylindrical bar GW antennae have been applied to the detection of this sort of event (Weber 1969), and the more modern cryogenic bars have been used for this purpose, too, with considerably enhanced sensitivities (Astone et al. 1991; Astone et al. 1993; Hamilton et al. 1994): the long decay times of the oscillations of the bar make it well suited for the measurement of impulsive, short duration signals (Gibbons & Hawking 1971; Astone, Bonifazi & Pallottino 1990).
It so happens, however, that some cylindrical GW antennae have been in continuous operation for many consecutive months, even years. This is the case, for example, with the Explorer detector, owned and operated by the ROG group in Rome (Italy) and installed within CERN premises in Geneva (Switzerland) (Astone et al. 1993). Long term operation naturally provides the appropriate background for a search for monochromatic signals in the detector data, as the requisite long integration times become available.
Monochromatic signals are most probably generated by the rotation of asymmetric stars, such as a pulsar or a neutron star. The intensity of the GWs strongly depends on the amount of asymmetry of the source, and this is in turn dependent on its equation of state (Bonazzola & Gourgoulhon 1996). Reasonably optimistic upper bounds on typical star parameters give an extremely weak signal estimate of h ∼ 10^−27 (Thorne 1987), which must be seen against a noisy background. Clearly, long integration times are required to reveal this kind of signal.
A systematic search for it must face a practical difficulty which derives from the fact that the signal is received at the antenna Doppler-shifted due to the daily and yearly motions of the Earth, in addition to possible internal motions within the source if it is e.g. in a binary system. Fourier analysis of long stretches of data results in high frequency resolution (Kay 1990), and hence in the signal being spread across several spectrum bins when it is Doppler shifted. This can naturally cause significant reduction in post-filter signal-to-noise ratio. The problem is easily overcome if the source position in the sky is known (or assumed) ahead of time, by means of suitable corrections based on ephemeris calculations. Analyses of this type exist in the literature: traces of a pulsar in the centre of the supernova SN1987A were sought in four days of data generated by the 30 metre Garching interferometer in March 1989 (Niebauer et al. 1993), and Frasca and La Posta studied almost four years of data generated by the room temperature bar detector GEOGRAV in search of periodic signals from the Galactic Centre and the Large Magellanic Cloud (Frasca & La Posta 1991). More recently, Mauceli (Mauceli 1997) has looked for monochromatic GW signals coming from the region of 47 Tuc and from the Galactic Centre in three months of data generated by the cryogenic detector ALLEGRO at Louisiana State University.
A different strategy must of course be used for an all-sky search. The philosophy of the procedure put forward by Frasca and La Posta (Frasca & La Posta 1991) consists in the construction of a large bank of spectra, taken over shorter stretches of data such that the frequency resolution in each individual spectrum is sufficiently low that daily Doppler shifted signals fit in a single spectral bin. Suitable comparison and averaging are thereafter applied to the spectra in order to draw conclusions about the intensity and/or bounds on signals. Astone et al. (Astone et al. 1998) have looked at one year (1991) of data taken by the above referenced Explorer detector to also perform an all-sky search for monochromatic sources of GWs. Their method is based upon local maxima identification in a bank of spectra, followed by close up analyses of frequency peaks looking for evidence of Doppler shift patterns across the duration of the entire data stretch.
In this paper we design and develop algorithms for the analysis of data generated by a resonant bar detector of GWs, in search of monochromatic signals within the system's sensitive frequency band. We are also interested in an all-sky search, but adopt a different point of view. Rather than scanning a bank of spectra, we propose to use a matched filter technique to estimate both frequency and phase of candidate signals, then set a threshold, using the Neyman-Pearson criterion, to select those events which have a given probability of crossing it as a consequence of pure random noise fluctuations. We have tested our methods in simulations with real Explorer detector data from 1991, and seen that they perform very satisfactorily. We plan to apply them to the massive processing of long stretches of data from the same antenna in a future paper, in order to provide analyses complementary to the procedures and methods already reported in (Astone 1998).
The article is structured as follows: in section 2 we present a few technical generalities and set the basic notation conventions. Section 3 is devoted to a detailed study of a situation in which the signal has a frequency exactly equal to one of those in the discrete Fourier spectrum of the data (Lobo & Montero 1997); this corresponds to an idealised situation whose consideration is methodologically useful, as it allows us to determine the signal's phase and to investigate the statistical properties of the filter output; it also establishes the main guidelines for the more realistic study in subsequent sections. In section 4 the method is illustrated with a signal artificially added to real detector data, which includes the estimation of the noise spectral density in the presence of such a signal. In section 5 we address the real case, in which the signal frequency no longer exactly matches any of the discrete samples, so that it leaks across neighbouring spectrum bins (Lobo & Montero 1998), and we also assess the statistical properties of the filter output (Montero 1998). Finally, in section 6 we apply the method again to real data with an external control signal added, and show that it works satisfactorily. The paper closes with a summary of conclusions and future prospects.

LINEAR DATA FILTERING
We begin with a review of some fundamental concepts of linear data processing, fixing also the basic notation which we will be using throughout this article.
In the general case, let u(n) (n = 0, . . . , N − 1) be the discrete set of samples which constitute our experimental data. A linear filter consists in a discrete set of numbers g(n; µi), depending on several parameters µi, which acts on the experimental data as follows:

y(µi) = Σ_{n=0}^{N−1} g(n; µi) u(n),

producing what we shall call the filter output. It is usually assumed that u(n) is the sum of two different contributions: on the one hand the signal, x(n), whose presence we want to assess, represented by a deterministic function, and on the other hand the noise r(n), a stochastic process:

u(n) = x(n) + r(n).

For any choice of parameters, it is appropriate to ask for the filter response both to the signal, yx, and to the noise, yr, the latter being a stochastic process, too. The ratio of the mean square values of these quantities,

ρ = y²x / <y²r>,

is termed in the literature the output signal-to-noise ratio (SNR), and it is a measure of the performance of the filter g(n; µi). The theory of the matched filter (Helstrom 1968; Papoulis 1984) precisely determines, up to a global constant, the functional form which this must have, for given signal and noise, in order to maximize ρ. In our case we shall assume that the noise r(n) can be adequately modelled by a zero-mean Gaussian and stationary stochastic process, whereas the signal x(n) will be the response of the cryogenic resonant detector Explorer to a pure monochromatic GW (Pallottino & Pizzella 1984; Montero 1997),

x(n) = A0 cos(2πf0 nT + ϕ0).

Here, T is the sampling interval, and A0 is the product of the GW amplitude by a conversion factor which defines the detector sensitivity at the frequency of the gravitational radiation, fg. This differs from f0 by a constant shift (Frasca et al. 1992), introduced by the data-acquisition system of the antenna in order to sample the antenna's full bandwidth of 27.5088 Hz. The matched filter for such a signal is then functionally equal to the latter (Helstrom 1968),

g(n; f, ϕ) = B cos(2πf nT + ϕ),

where B is an arbitrary constant.
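A minimal numerical sketch of this linear filtering step may help fix ideas. The code below (our own illustrative parameters, not the Explorer detector's) builds data u(n) as a monochromatic signal buried in zero-mean Gaussian noise, and evaluates the filter output as the inner product of u with a cosine template.

```python
import numpy as np

# Sketch of the linear filtering step: u(n) = x(n) + r(n), and the filter
# output is y = sum_n g(n; k, phi) u(n) with a cosine template g.
# All parameter values are illustrative, not the detector's.

rng = np.random.default_rng(0)
N = 4096
n = np.arange(N)

k0, phi0, A0 = 200, 0.7, 0.2                       # hypothetical signal parameters
x = A0 * np.cos(2 * np.pi * k0 * n / N + phi0)     # deterministic signal
r = rng.normal(0.0, 1.0, N)                        # stochastic noise
u = x + r                                          # experimental data

def filter_output(u, k, phi, B=1.0):
    """y = sum_n g(n; k, phi) u(n), with g a cosine template of amplitude B."""
    m = np.arange(len(u))
    g = B * np.cos(2 * np.pi * k * m / len(u) + phi)
    return float(np.dot(g, u))

y_match = filter_output(u, k0, phi0)       # template matches the signal
y_off = filter_output(u, k0 + 37, phi0)    # mismatched frequency bin
```

With these numbers y_match is dominated by the signal term N A0/2, while y_off responds to noise alone, illustrating why a matched template maximizes ρ.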

The signal
As pointed out in the introduction, we want to develop in this article a general method whose operativity does not depend on the existence of prior information about the source. So, in principle, the value of the frequency f0 of the signal we can detect may lie anywhere within the interval

0 < f0 < 1/2T,

where 1/2T is our Nyquist frequency. Obviously no search strategy can afford the endless analysis of all the frequencies in that window, so we shall be forced to select a finite set of frequencies to scan. Nevertheless, the very functional form of the filter shows us that we shall perform discrete Fourier transforms (DFTs) in its implementation, and this defines the set of frequencies which will be searched in actual practice:

fk = k/(N T),  k = 1, 2, . . . , N/2 − 1.

Moreover, for practical reasons, all the DFTs will be numerically computed using the fast Fourier transform (FFT) algorithm, a highly optimized procedure which naturally computes all spectral components at once, with the only restriction that the number of samples be an exact power of 2.
In this section, we shall assume that the signal is well matched by the spectral template. By this we mean that the frequency of the signal is in fact one of those in equation (8), so that all the signal power is in one single bin of the FFT, with no leakage to the neighbouring ones. More precisely, we shall be assuming that

f0 = k0/(N T),

where k0 is one of 1, . . . , N/2 − 1, though we do not know which. We shall disregard the study of any k0 bigger than N/2 because it would be redundant: such values represent nothing but negative frequencies. The value k0 = 0 is also disregarded since, among other considerations, it represents no wave at all, but a constant signal.
Summing up, the target of the present analysis will be to assess the presence of a signal x(n) = A0 cos(2πk0n/N + ϕ0) in the experimental data series u(n), using a matched filter g(n; k, ϕ) = B cos(2πkn/N + ϕ), depending on the two unknown parameters, k and ϕ, which we shall eventually estimate. Besides the advantageous absence of frequency leakage in the filter output for such signals, equation (10) shows that x(n) is a periodic function over the entire processed period, because x(N) is equal to x(0). In fact, this relationship holds for any sample, x(n + N) = x(n), and it will be a crucial aspect of the developments which we shall introduce below.
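The two properties just stated, no leakage and periodicity over the processed stretch, can be verified numerically; the check below uses illustrative values in our own notation.

```python
import numpy as np

# A signal sitting exactly on an FFT bin (f0 = k0/(N*T)): all of its power
# falls in the single bin k0, and the block of N samples is periodic,
# x(n + N) = x(n). Illustrative parameters.

N, k0, A0, phi0 = 1024, 90, 1.0, 0.3
n = np.arange(N)
x = A0 * np.cos(2 * np.pi * k0 * n / N + phi0)

power = np.abs(np.fft.rfft(x)) ** 2
leak = power.sum() - power[k0]          # power outside the nominal bin

x_shifted = A0 * np.cos(2 * np.pi * k0 * (n + N) / N + phi0)   # x(n + N)
```

Up to floating-point round-off, leak vanishes and x_shifted coincides with x.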

The filter performance and the role of B
Let us compute the two quantities y²x and <y²r>, in order to evaluate the actual goodness of the filter. y²x is different from zero only if a signal is really present and the value of the parameter k matches k0, i.e. the filtered signal does not leak across different frequency bins:

yx(k0, ϕ) = (N B A0/2) cos(ϕ − ϕ0).

For <y²r(k, ϕ)> we have the more complex formula

<y²r(k, ϕ)> = B² Σ_{n=0}^{N−1} Σ_{m=0}^{N−1} cos(2πkn/N + ϕ) cos(2πkm/N + ϕ) R(n − m),

where we have introduced the autocorrelation function of the noise, R(n) = <r(m) r(m + n)>. If we assume⋆ that R(n) ∼ 0 for n ∼ N and expand the product of cosines into a term depending on n − m plus a term depending on n + m, it is clear that the second term becomes negligible in front of the first, and hence

<y²r(k, ϕ)> ≈ (B² N/2T) S(k; N),  S(k; N) ≡ (T/N) <|Σ_{n=0}^{N−1} r(n) e^{−2πikn/N}|²>.

It can be easily shown that the quantity S(k; N) we have just defined is the mean value of a periodogramme, a well-known estimator of the power spectral density of the noise at that particular frequency, based on the Wiener-Khinchine theorem (Kay 1990).
Putting together expressions (13) and (16) we finally get for the SNR

ρ(ϕ) = ρ0 cos²(ϕ − ϕ0),  ρ0 ≡ A0² N T/(2 S(k0; N)),

with ρ0 the maximum value for ρ we may achieve with the present filter.
The SNR is obviously independent of the constant B, so we can freely set it at our will in order to endow y with some advantageous property. Our particular choice,

B = √(2T/(N S(k; N))),

is the factor that makes <y²r(k, ϕ)> equal to one. The statistical properties of the noise and the linearity of the filter guarantee that y is still Gaussian. Its probability distribution will therefore be completely determined once we know its mean <y> and its variance σ²y, which in our case coincides with <y²r> = 1. So, on the one hand, we have forced σy to take the same value regardless of the particular scanned frequency, and on the other hand, the mean of y(k, ϕ),

<y(k, ϕ)> = √ρ0 cos(ϕ − ϕ0) δ_{k,k0},

⋆ This, in fact, is a constraint on the number of samples that we want to filter: we must set N large enough to sufficiently exceed the correlation time.
© 1998 RAS, MNRAS 000, 1-17

shall be zero when either no signal is present in the data or, if there is a signal†, for any value of k other than k0. In this way, we have designed a bank of filters whose outputs corresponding to pure noise are statistically equivalent, and can consequently be directly compared.

Data splitting and averages
So far, we have implicitly assumed that N represents the total amount of stored information we have access to or, in other words, that we can analyze the whole data series in a single filter pass. This is, in many senses, a rather optimistic assumption. First of all, since we want to process several months of experimental data, it should not be surprising if the available (or even existing) computing facilities could not afford such a calculation. Moreover, the output of any experimental device, like Explorer, will not be uniform in quality over the whole data acquisition time, and the stationarity of the noise is not preserved over too long periods. It could then be worse to mix bad data (those, for instance, with a high level of noise) with good data in a single analysis than simply to veto the stretch that we find unacceptable. But the gaps that we may introduce by rejecting samples of the experimental set are not the only discontinuities that we shall find in the time series, because in such long term operation of a detector it is not unlikely that the system suffers sporadic stops. Also, the properties of the physical signal may not be stable enough to be satisfactorily fitted by our models over extended periods of time.
We shall thus consider that each series of length N is just one among a set of, say, M consecutive‡ blocks. The reasons for the choice of the particular values of N and M need not coincide in general. In particular, it is possible that there exist several such sequences of N × M data, possibly disconnected, which must then be processed separately.
So, we shall attach a new label α to each quantity in order to be able to specify which of the M blocks of N data we refer to:

uα(n) ≡ u(αN + n),  α = 0, 1, . . . , M − 1,  n = 0, 1, . . . , N − 1;

α is actually a shorthand which simplifies the notation. It is obvious, however, that computing yα(k, ϕ) for each α will not change the individual values of ρ, as shown by equation (19). Our final goal should then be to combine them in a suitable way which makes the final SNR as high as possible. The definition (25) is, in this sense, very revealing because, when combined with (10) and (12), it shows the most important feature of a non-leaking monochromatic wave: the signal xα in fact does not depend on α,

xα(n) = x(n).

We consequently see that, if the signal is present, each of these blocks contains an identical replica of the same stretch of sinusoid. This motivates us to define a new random variable

z(k, ϕ) = (1/M) Σ_{α=0}^{M−1} yα(k, ϕ),

whose mean value does not differ from that in equation (23), but whose variance is reduced as a consequence of this averaging operation. Before we calculate explicitly this quantity and the value of the new associated SNR, we are going to focus on the problem of choosing the right value of the phase parameter of the filter, ϕ.
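The averaging idea can be sketched numerically. In the toy example below (illustrative parameters, our own normalisation), a non-leaking signal contributes an identical replica to every block, so the mean of the M filter outputs keeps the signal contribution while the noise variance shrinks by 1/M.

```python
import numpy as np

# Block averaging: z = (1/M) * sum_alpha y_alpha. The signal term is the
# same in every block; only the noise terms fluctuate. Illustrative values.

rng = np.random.default_rng(1)
N, M = 2048, 64
k0, phi0, A0 = 120, 0.4, 0.05
n = np.arange(N)

template = np.cos(2 * np.pi * k0 * n / N + phi0)   # matched template
signal = A0 * template                              # identical in each block

y = np.empty(M)
for alpha in range(M):
    u_alpha = signal + rng.normal(0.0, 1.0, N)      # block alpha of data
    y[alpha] = np.dot(template, u_alpha)            # y_alpha(k0, phi0)

z = y.mean()                    # averaged output: same mean, variance / M
expected = N * A0 / 2.0         # noise-free value of each y_alpha
```

Here the per-block outputs y scatter widely around the noise-free value, while their mean z sits much closer to it, by a factor √M in standard deviation.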
The conceptually simplest method is to compute z(k, ϕ) for many different values of that parameter, and then select the best one, ϕ̂, i.e., the one which gives the largest output after the filtering procedure.
Nevertheless, we do not need to go into such a computationally long process, for the optimum value ϕ̂ can be analytically determined as follows. According to its definition, we may write down z(k, ϕ) as

z(k, ϕ) = (B/M) Σ_{α=0}^{M−1} Re[ũα(k) e^{−iϕ}],

where

† In fact, we will put together both cases, since we shall take the criterion of typifying through ρ0 = 0 any frequency that contains no signal, whether or not there is a signal at some other value of k.
‡ The relaxation of this condition, allowing for the existence of missing whole blocks, introduces minor changes in all the following discussion, so the derived expressions can easily be adapted to this case.
ũα(k) = Σ_{n=0}^{N−1} uα(n) e^{−2πikn/N} is the DFT of uα(n). So, we define ϕ̂ by imposing a local-maximum condition on z(k, ϕ):

∂z(k, ϕ)/∂ϕ |_{ϕ=ϕ̂} = 0.

Here it is useful to introduce the two random variables

R(k) = (B/M) Σ_{α=0}^{M−1} Re ũα(k),  I(k) = (B/M) Σ_{α=0}^{M−1} Im ũα(k),

because they can easily be combined to yield z(k, ϕ̂),

z(k, ϕ̂) = √(R²(k) + I²(k)) ≡ z̄(k),

where we have defined z̄(k) as the best output of the filter at a given frequency, extending the notation used with ϕ.
The statistical properties of the actual filter output z̄(k) will strongly differ from those of z(k, ϕ), since the new random variable is the fruit of a non-linear filtering process. For instance, its mean is no longer equal to zero, even if there is no signal in the experimental data, as we shall see. This is the reason why we did not undertake a very detailed study of z(k, ϕ) in the first place.
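The phase-maximisation step above can be sketched as follows. The code uses our own conventions (not the paper's exact normalisation): R(k) and I(k) are block-averaged real and imaginary parts of the DFTs, the best phase follows from the stationarity condition, and the best output is the modulus √(R² + I²).

```python
import numpy as np

# Phase estimation via R and I: tan(phi_hat) = I/R at the stationary point,
# and z_bar(k) = sqrt(R(k)**2 + I(k)**2). Illustrative parameters.

rng = np.random.default_rng(2)
N, M = 1024, 32
k0, phi0, A0 = 60, 1.1, 0.2
n = np.arange(N)

dfts = np.array([
    np.fft.rfft(A0 * np.cos(2 * np.pi * k0 * n / N + phi0)
                + rng.normal(0.0, 1.0, N))
    for _ in range(M)
])                                    # u~_alpha(k) for each block

R = dfts.real.mean(axis=0)
I = dfts.imag.mean(axis=0)
z_bar = np.hypot(R, I)                # best filter output at every bin

k_hat = int(np.argmax(z_bar[1:])) + 1         # skip the constant (k = 0) bin
phi_hat = np.arctan2(I[k_hat], R[k_hat])      # estimated signal phase
```

For an on-bin cosine, the DFT at k0 is proportional to e^{iϕ0}, so arctan2(I, R) recovers the phase without any scan over trial values of ϕ.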

Probability distribution of z̄(k)
We shall build the probability density p(z̄) starting from p(R) and p(I). These two auxiliary random variables are Gaussian by construction, and are statistically independent, as they are the real and imaginary parts of the Fourier transform of a stationary stochastic process. Then we only have to know the respective means and variances in order to complete the information that fully determines their probability densities. The mean values of R and I can be readily found from their definitions and equation (23):

<R(k)> = √ρ0 cos ϕ0 δ_{k,k0},  <I(k)> = √ρ0 sin ϕ0 δ_{k,k0},

while for the variances it can be found that both quantities are approximately equal,

σ²R ≈ σ²I ≈ 1/M,

because the correlation time is much shorter than the duration of the individual series; this is in essence equivalent to stating that, in spite of their consecutiveness, the blocks are mutually uncorrelated.
The hypotheses we have made lead us to the following expressions for the probability density of R and I (written here for k = k0):

p(R) = √(M/2π) exp[−M(R − √ρ0 cos ϕ0)²/2],  p(I) = √(M/2π) exp[−M(I − √ρ0 sin ϕ0)²/2].

Hence p(z̄) is given, after an integral is evaluated, by

p(z̄) = M z̄ exp[−M(z̄² + ρ0)/2] I0(M z̄ √ρ0),

where we have used one of the integral representations of the modified Bessel function of order zero (Olver 1970), I0(y) = (1/2π) ∫₀^{2π} exp(y cos θ) dθ. In the absence of signal, i.e., when ρ0 = 0, equation (40) reduces to

p(z̄) = M z̄ exp(−M z̄²/2),

an expression which explicitly displays the statistical equivalence of all the frequencies which contain no signal, the property we wanted to achieve when we set the value of the constant B. Moreover, in this case z̄ is Rayleigh distributed or, in other words, M z̄² follows a χ² distribution with two degrees of freedom (Papoulis 1990).

§ We can ensure that ϕ̂ defined in this way leads to a maximum and not a minimum of the function z(k, ϕ), because its second derivative at ϕ̂ equals −z(k, ϕ̂); choosing the root of the stationarity condition with z(k, ϕ̂) = +√(R² + I²) therefore guarantees a maximum.
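A quick Monte Carlo check of the no-signal case may be useful. Under the convention used in this sketch, R and I are independent zero-mean Gaussians with variance 1/M, so z̄ = √(R² + I²) is Rayleigh distributed, with mean σ√(π/2) and mean square 2σ².

```python
import numpy as np

# No-signal case: z_bar = sqrt(R**2 + I**2) with R, I ~ N(0, sigma**2),
# sigma**2 = 1/M, is Rayleigh distributed. Illustrative convention.

rng = np.random.default_rng(3)
M, trials = 50, 200_000
sigma = 1.0 / np.sqrt(M)

R = rng.normal(0.0, sigma, trials)
I = rng.normal(0.0, sigma, trials)
z_bar = np.hypot(R, I)

mean_th = sigma * np.sqrt(np.pi / 2.0)    # Rayleigh mean
msq_th = 2.0 * sigma ** 2                 # Rayleigh mean square
```

The empirical mean and mean square of the simulated z̄ agree with the Rayleigh predictions to well under a percent at this sample size.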

Mean and variance of p(z̄). A new SNR.
We are now interested in the mean and variance of z̄. These correspond to the first two moments of the probability distribution p(z̄). It turns out that a closed analytic expression can be found for the moments of any order, so we consider them here for completeness.
The m-th moment is defined by

<z̄^m> = ∫₀^∞ z̄^m p(z̄) dz̄,

a calculation that becomes straightforward if one uses the relationship (Gradshteyn & Ryzhik 1980)

∫₀^∞ x^{m+1} e^{−a x²} I0(bx) dx = [Γ(1 + m/2)/(2 a^{1+m/2})] e^{b²/4a} L_{m/2}(−b²/4a),

where Ly(z) is the Laguerre function of order y, assuming the normalization condition Ly(0) = 1, and Γ(y) is Euler's gamma function. The expectation value of the m-th power of z̄ is then

<z̄^m> = (2/M)^{m/2} Γ(1 + m/2) L_{m/2}(−ρ̄/2).

The most relevant moments for our purpose are, as has been said, the first and the second. The mean, as we announced, will be different from zero even when ρ0 vanishes, due to the property (45) of Laguerre functions:

<z̄> = √(π/2M) L_{1/2}(−ρ̄/2),

so that <z̄> = √(π/2M) when ρ0 = 0. Nevertheless, the asymptotic behaviour of the Laguerre function also shows that <z̄>, when a signal is present, approaches √ρ0, the maximum mean value which, according to (23), the random variables yα(k, ϕ) can possibly reach. It is worth noting that equation (49) ensures that the last expression holds not only when ρ0 ≫ 1 but whenever ρ̄ ≫ 1, where

ρ̄ ≡ M ρ0

is a quantity that plays the role of the new SNR. The same conclusion can be obtained from the study of the explicit expressions for <z̄²x> and <z̄²r>. It is relevant to point out that, in this case, the SNR linearly increases with the total number of filtered samples, N × M:

ρ̄ = M ρ0 = A0² N M T/(2 S(k0; N)),

whereas the more classical procedure of averaging the square of the moduli of the DFTs leads to an SNR which only grows as √M ρ0. This means that, with our strategy, for signals whose frequency is one of the FFT samples, we enhance the value of ρ̄ by a factor √M.

Estimation of the noise spectral density

It is almost redundant to say that the operative method we have just developed requires knowledge of the power spectral density of the noise. The aim of this section is the effective substitution of S(k; N) in the definition of the constant B by a suitable estimate of this quantity obtained from the same data series.
Let us begin with a rearrangement of expression (18), which allows us to write

S(k; N) = (T/N) <|ũ(k) − x̃(k)|²>.

There is one operation in this formula that is certainly beyond our control: we cannot perform the statistical average. Our particular choice will be the substitution of that operation by a sum over the entire range of values of α, because S(k; N), in spite of the formal aspect of (56), is independent of the block label. The same applies upon replacement of x̃(k) (obviously also an unknown quantity), since x̃(k) = <ũα(k)>, which we estimate by the block average ū(k) ≡ (1/M) Σ_{α=0}^{M−1} ũα(k). Summing up, the random variable we shall use in order to estimate S(k; N) is

Ŝ(k; N) = [T/(N(M − 1))] Σ_{α=0}^{M−1} |ũα(k) − ū(k)|²,

where we have divided by M − 1 and not by M because this way we get an unbiased estimator, i.e., <Ŝ(k; N)> = S(k; N).
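The unbiasedness of this spectral estimate is easy to check numerically. The sketch below uses a hypothetical sampling interval T and white Gaussian noise (so the true two-sided spectrum is flat, σ² T), replaces the statistical average by a sum over M blocks, and divides by M − 1.

```python
import numpy as np

# Unbiased spectral estimate: S_hat = T/(N*(M-1)) * sum_alpha
# |u~_alpha(k) - mean_alpha u~_alpha(k)|**2. White noise only here, so the
# true PSD is sigma2*T at every bin. Illustrative parameters.

rng = np.random.default_rng(4)
N, M, T = 1024, 16, 1.0
sigma2 = 2.0                                   # noise variance per sample

dfts = np.array([np.fft.fft(rng.normal(0.0, np.sqrt(sigma2), N))
                 for _ in range(M)])           # u~_alpha(k), noise only

u_mean = dfts.mean(axis=0)                     # estimate of x~(k)
S_hat = (T / (N * (M - 1))) * (np.abs(dfts - u_mean) ** 2).sum(axis=0)

S_true = sigma2 * T                            # flat spectrum of white noise
```

Averaged over bins, the estimate reproduces the true flat spectrum; subtracting the block mean also removes any non-leaking signal contribution from the noise estimate.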
Now we can replace the unknown spectral density by Ŝ(k; N) in any preceding expression, thus obtaining a new filter output Z(k) which, unlike z̄(k), we are able to compute directly from the raw experimental data uα(n). With a procedure analogous to the one already explained, we obtain p(Z) and all its related quantities, including the corresponding SNR. Instead of starting from scratch, we shall calculate the probability density of Z in two steps, using previous results.
Let us introduce the auxiliary random variable W(k), which allows us to define Z(k) in a simple way, thanks to the fact that all the terms containing S(k; N) in both random variables mutually cancel out. Since W and z̄ are statistically independent, and p(z̄) was given in the last section, we have thus reduced the problem to obtaining p(W) and performing a final integration.
The probability density of W can be found in most reference books on Probability (Papoulis 1990), because it is the arithmetic mean of the squares of 2M − 2 zero-mean independent Gaussian variables with unit variances. So (2M − 2)W follows a χ² distribution with precisely 2M − 2 degrees of freedom.

The actual filter output Z and its distribution
The ratio in (61) which we have used for defining Z is a familiar one in elementary statistics, and is known to follow a Student's t-distribution if z̄ is a zero-mean Gaussian variable. Nevertheless, the expression for p(z̄) is far from a normal density function, as we have shown in (40), which compels us to perform an explicit calculation, leading to the result (63), where once again we have made use of formula (44). When no signal is present in the data at one particular frequency, expression (63) reduces to a simpler form. As a matter of fact, Z² in this case follows a Fisher F-distribution with 2 and 2M − 2 degrees of freedom, because it is the ratio of two χ² random variables with those degrees of freedom, respectively.
In order to compute the moments of the density function of Z, the simplest approach is not to use the final expression for p(Z), but an intermediate formula that avoids the integration of Laguerre functions with negative arguments. We thus find an expression whose layout emphasizes the resemblance to the result corresponding to <z̄^m>. The term inside the square brackets approaches unity when M ≫ m, and we then recover formula (46). It is especially interesting to note that, in particular, the newly defined SNR remains unchanged. Let us split the second order moment of p(Z) into two terms, namely <Z²r> and <Z²x>. When no signal is present, ρ0 = 0, the value of <Z²> is merely due to the response of the noise to the filtering procedure, so we shall accordingly assign to the signal the rest of the outcome.

A practical example
This section is devoted to showing the result of such a procedure when applied to a small stretch of data taken by Explorer on August 3rd, 1991, and successive days. The starting date was randomly selected, since the final purpose of the present analysis is not to draw conclusions on the presence of GWs, but to assess the practical performance of the method itself. We have thus externally introduced a sinusoidal signal with the required absence of frequency leakage, in order to check the ability of the method to reveal it. The signal, corresponding to a GW with an amplitude of h0 = 10^−23, was placed at about 921.4 Hz, near the detector's plus resonance (Astone et al. 1993). For this particular date the level of noise in the detector was such that the SNR for this signal was ρ0 ∼ 1/3 for a number of filtered samples N = 131072, a little less than forty minutes of data. The signal was therefore completely buried in the noise. The specific value N = 131072 may seem arbitrary in this context, but it has a physical reason: it ensures that no Doppler shift can be observed in the individual blocks of N samples. Once we set N, we can pin down the precise frequency bin which contains the control signal: k0 = 50918.
Let us see what happens when we process six hours of data (M = 9). Looking at Figure 1.a, it is by all means impossible to decide whether the signal is really present or not: the SNR has only risen to a value near unity from the original 1/3 with so few blocks processed. By increasing M, i.e., processing longer stretches of data, the SNR grows and the signal becomes progressively more distinct, as we see in Figures 1.b-d, corresponding to half a day, one day and two days of filtered data, respectively. We must stress at this point that the theoretical prediction of equation (51), namely that the energy SNR grows linearly with the number M of processed blocks, is very accurately observed in real practice, as we have numerically verified with the plotted data. The improvement by a factor of √M relative to more standard procedures (see equation (55)) is thus firmly established not only in theory but also in actual practice.
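The scaling just discussed can be reproduced in a small Monte Carlo experiment. Below, coherently averaging the complex DFT bin over M blocks (the analogue of z̄ in this sketch) yields a detection statistic whose deflection grows like M, while averaging the squared moduli (the classical spectrum average) only grows like √M; amplitudes and normalisations are illustrative.

```python
import numpy as np

# Coherent averaging vs classical spectrum averaging for a non-leaking
# signal occupying a single DFT bin. Illustrative parameters.

rng = np.random.default_rng(6)
M, trials = 16, 20_000
s = 0.5 + 0.0j                        # signal contribution to the DFT bin

def statistics(with_signal):
    g = (rng.normal(0.0, 1.0, (trials, M))
         + 1j * rng.normal(0.0, 1.0, (trials, M)))       # complex noise
    u = g + (s if with_signal else 0.0)
    coherent = np.abs(u.mean(axis=1)) ** 2               # |average|^2
    classical = (np.abs(u) ** 2).mean(axis=1)            # average of | |^2
    return coherent, classical

coh_s, cla_s = statistics(True)
coh_0, cla_0 = statistics(False)

snr_coh = (coh_s.mean() - coh_0.mean()) / coh_0.std()
snr_cla = (cla_s.mean() - cla_0.mean()) / cla_0.std()
ratio = snr_coh / snr_cla             # should approach sqrt(M) = 4
```

With M = 16 the measured ratio of deflections comes out close to √M = 4, in line with the √M advantage quoted in the text.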
With the output of the filtering process for the values of k other than k0 we can compute¶ the distribution of Z, because when no signal is present it does not depend on k. The case M = 9 is again very interesting, because it offers us the possibility of comparing the experimental distribution with p(z̄) and p(Z), thus checking that Z really follows the second and not the first (see Figure 2.a). For higher values of M the two probability densities converge and become almost indistinguishable from one another (Figure 2.b). In both instances, however, the coincidence between theoretical and experimental distributions is remarkable. Moreover, the explicit form of p(Z) is very useful, not only in order to compare it with the experimental one, but to fix a threshold λ0, which shall help us in the task of deciding whether a given crossing has statistical significance or not. We will calculate the error of the first kind, or false-alarm probability, Q0, as a function of λ0, and then we shall set the upper bound depending on the number of false alarms (i.e. mistakes) we can afford, using the well known Neyman-Pearson criterion (Helstrom 1968). Table 1 shows how λ0 varies both with the value of Q0 and with the number of processed data blocks, M. The signal's height, in units of the plots in Figure 1, is 0.473, 0.684 and 0.595 for M = 18, 36 and 72, respectively. It is therefore above threshold if M = 72 with false alarm probability Q0 = 10^−5, and both if M = 36 and M = 72 with false alarm probability Q0 = 10^−3. Even so, it cannot be clearly distinguished from other random fluctuations, as we see in the Figure; this is of course because it is very weak. We shall come back to the discussion of the significance of these thresholds below.

¶ In fact, we do not need to remove k0 (or any other presumed signal) before we compute the output distribution, because at most it is only one point in 2^16.
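The threshold-setting step can be sketched analytically. Taking the result quoted earlier that, with pure noise, Z² follows an F-distribution with (2, 2M − 2) degrees of freedom, the survival function of F(2, d2) has the closed form P(F > x) = (1 + 2x/d2)^{−d2/2}, so the Neyman-Pearson condition Q0 = P(Z > λ0) can be inverted in closed form. Any overall scale convention for Z is assumed here to be the one that makes this exact.

```python
import math

# Neyman-Pearson threshold from the no-signal distribution Z**2 ~ F(2, 2M-2).

def false_alarm(lam, M):
    """Q0 = P(Z > lam) for pure noise, using the F(2, 2M-2) survival function."""
    d2 = 2 * M - 2
    return (1.0 + 2.0 * lam ** 2 / d2) ** (-d2 / 2.0)

def threshold(Q0, M):
    """lambda0 such that P(Z > lambda0) = Q0 under pure noise."""
    d2 = 2 * M - 2
    x = (d2 / 2.0) * (Q0 ** (-2.0 / d2) - 1.0)   # F quantile at 1 - Q0
    return math.sqrt(x)

lam_q3 = threshold(1e-3, 72)     # Q0 = 10^-3, M = 72
lam_q5 = threshold(1e-5, 72)     # Q0 = 10^-5, M = 72
```

Stricter false-alarm probabilities raise the threshold, as in Table 1; note that Q0 here is a per-frequency-bin probability, so a whole-search false-alarm budget would have to be shared among all scanned bins.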

A leaking signal
We begin this section by considering the effects that the presence of a signal of general frequency in the data may produce on the results presented in the preceding sections. In particular, we shall study the new statistical properties of the random variables yα(k, ϕ), which determine the characteristics of R and I, and consequently of z̄.
So, in the following we shall relax the condition (9) by introducing the real parameter ε0,

f0 = (k0 + ε0)/(N T),  −1/2 < ε0 ≤ 1/2,

whose effective consequence is that xα(n) is no longer independent of α; the frequency remainder appears in the form of an accumulative phase shift whenever ε0 is different from zero:

xα(n) = A0 cos[2π(k0 + ε0)n/N + ϕ0 + 2παε0].

While it is true that the frequency remainder also lowers the maximum filter output, due to the spectral leakage of the signal, it is nevertheless the block dependence that damages the filtering procedure, because it is responsible for the sinc-like behaviour of the mean values of R(k0) and I(k0) when considered as functions of M. This fact makes the search strategy less robust than desired, in the sense that for a given ε0 other than zero there always exists a value M0 (∼ 1/ε0) of M above which the SNR decreases noticeably. The frequency band where the analysis method works efficiently thus shrinks as the number of averaged blocks grows, a very undesirable feature. We now investigate how this problem can be addressed.
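The attenuation can be made concrete with a small numerical illustration (offset chosen for illustration only): a frequency remainder ε0 adds a phase 2παε0 to block α, and the magnitude of the coherent average over M blocks is damped by the Dirichlet-kernel ("sinc-like") factor |sin(πMε0)/(M sin(πε0))|.

```python
import numpy as np

# Attenuation of the coherent block average caused by an accumulative
# phase shift 2*pi*alpha*eps0. Illustrative offset.

eps0 = 0.02

def attenuation(M, eps0):
    """|mean over alpha of exp(2j*pi*alpha*eps0)|, alpha = 0..M-1."""
    return abs(np.exp(2j * np.pi * np.arange(M) * eps0).mean())

att_small = attenuation(8, eps0)      # M well below 1/eps0 = 50: mild loss
att_large = attenuation(64, eps0)     # M above 1/eps0: severe loss
closed_form = abs(np.sin(np.pi * 64 * eps0) / (64 * np.sin(np.pi * eps0)))
```

For M well below 1/ε0 the loss is negligible, while for M above 1/ε0 most of the signal is averaged away, which is exactly the M0 ∼ 1/ε0 behaviour described in the text.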

Phase-varying filter
As already stated, since the origin of the problem is a carried-over phase, we shall solve it by introducing a new block-dependent filter with one more parameter, ε:

gα(n; k, ϕ, ε) = √(2T/(N S(k; N))) cos(2πkn/N + ϕ + 2παε),

with the purpose of compensating for the phase shift. Starting from this new gα(n; k, ϕ, ε), we can calculate the value of each yα(k, ϕ, ε). Through a definition formally identical to (27), we shall establish z(k, ϕ, ε). Once again it is possible to obtain ϕ̂ using a local-maximum condition, like that in (30).
The value of ϕ̂ now leads to the following expression for z(k, ϕ̂, ε):

z(k, ϕ̂, ε) = B |≈u(k; ε)|,

a relationship that involves a new quantity, ≈u(k; ε), which formally is also a DFT,

≈u(k; ε) = (1/M) Σ_{α=0}^{M−1} ũα(k) e^{−2πiαε}.

The template for ε, just like in the case of the frequency grid, will be dictated by the convenience of using the FFT algorithm in the computation of ≈u(k; ε). We shall therefore estimate ε0 within the following discrete range of values of ε:

ε = q/M′,  q = 0, 1, . . . , M′ − 1,

where, in principle, M′ (incidentally, also an exact power of 2) need not coincide with M. M′ has to be greater than M if we do not want to waste available information; on the other hand, it might seem that larger and larger values of M′ should produce an endless increase of precision in the estimation of ε0. Nevertheless, as we will show in the next section, M′ should be kept as low as possible.
Equation (85), together with the span condition (86), bears a considerable formal resemblance to the so-called zoom transform (Yip 1976; Hung 1981; de Wild et al. 1987), in which the twiddle factor is here identically equal to one. It must however be stressed that, in the present context, ours is an interpolation formula rather than a frequency-resolution algorithm. Our formalism can nevertheless be naturally extended to refine the spectral resolution, simply taking M′ = M, i.e., data sets of N × M′ points, where all possible values of q must be considered, for a selected choice of frequency bins k.
The situation could be even worse, because we have no guarantee that q̂/M′ corresponds to the best approximation to the actual ϵ0: the noise can mislead us into an inaccurate value of the signal frequency, thus leaving the true ũ̃(k; ϵ0) among the discarded ones. Instead of constructing an estimator for S(k; M) in the hope that none of the previously stated possibilities actually takes place, which could lead us again to a filtering procedure too sensitive to the signal's peculiar properties, we are going to choose a democratic estimate Ŝ(k; N): perhaps it will not be as accurate as it could be, but it will not show appreciable differences in performance depending on the actual frequency of the GW.
We define Ŝ(k; N) through an expression that closely resembles that in equation (58), where the ũα(k) have been replaced by ũ̃(k; q/M′), and the sums do not contain the term where the signal is supposed to be located, ũ̃(k; q̂/M′).
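Under these assumptions, the democratic estimator amounts to averaging the squared moduli over the ϵ grid while skipping the candidate bin q̂. A sketch (overall calibration constants from equation (58), which is not reproduced in this excerpt, are omitted):

```python
import numpy as np

def S_hat(u2, q_hat):
    """'Democratic' noise spectral estimate at a given frequency bin:
    average |u2[q]|^2 over the eps grid q = 0..M_prime-1, excluding
    the bin q_hat where the candidate signal sits.  Calibration
    constants are omitted."""
    mask = np.ones(len(u2), dtype=bool)
    mask[q_hat] = False
    return np.mean(np.abs(u2[mask]) ** 2)
```

Excluding q̂ is what makes the estimator underestimate the noise in signal-free bins, as discussed below.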
Once more we replace S(k; N) by Ŝ(k; N) in the definition of z̄, in order to get a new random variable Z, which we can compute using only experimental data. This quantity inherits two characteristic traits from the way we estimate the spectral density of the noise.
On the one hand, if the signal is large enough the filter output may saturate, at a limit which depends on the particular values of some parameters, such as M or M′. This means that Z will not surpass a certain level, even if the amplitude of the signal increases indefinitely. The reason for this behaviour is that when the signal is much more intense than the noise, what becomes really difficult to assess is not the presence of the former but the properties of the latter, so in these cases Ŝ(k; N) will overestimate S(k; N). It must however be stressed that the whole effect results in a change in the value of ρ0, and thus does not actually set an upper bound on the SNR ρ. On the other hand, when no signal is present at a particular frequency, Ŝ(k; N) will underestimate S(k; N), since we do not use q̂ when computing it, and ũ̃(k; q̂/M′) has the largest modulus among all the ũ̃(k; q/M′).
Computing the overall probability density of Z(k) is very difficult, either in a single step (even if no signal is present) or in two steps, as in Section 4.1, since the auxiliary random variable W(k), defined as in (60), is no longer independent of z(k). We shall thus obtain it by a different method. We introduce a tentative p(Z) inspired by the limiting probability density p(z), since when M is large enough both must coincide. The new parameter w, which condenses all the differences between p(z) and p(Z), measures in some suitable sense the bias of the spectral estimator Ŝ(k; N), i.e., the deviation of ⟨Ŝ(k; N)⟩ from S(k; N). This point of view is however of limited practical use, since we are not in a position to compute ⟨Ŝ(k; N)⟩ theoretically, as just stated.
The value of w, however, can be estimated from the filter output itself, using for instance the relationship (91), replacing z̄ by √w Z, and the statistical mean by an average over the filter output. The procedure we shall use to compute (100) also reminds us that Z(k) can be obtained from the experimental time series irrespective of its probability density: we set the value of w a posteriori, once the filtering process has finished. In fact, the functional form of p(Z) has its very origin in the comparative study of the actual distribution of Z(k) and the probability density of reference, p(z), for different values of the free parameters. We pursue these ideas in the next section.
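As an illustration of this a posteriori determination of w, the sketch below matches the empirical mean of the filter output to a theoretical first moment. The specific moment relation used is an assumption on our part, since the paper's relationship (91) is not reproduced in this excerpt:

```python
import numpy as np

def estimate_w(Z, mu_zbar):
    """Moment-matching estimate of w: choose w so that the average of
    sqrt(w)*Z over the filter output equals a theoretical mean mu_zbar
    of zbar.  That this is a first-moment relation is our assumption;
    the paper's equation (91) is not reproduced in this excerpt."""
    return (mu_zbar / np.mean(Z)) ** 2
```

Whatever the precise moment used, the point is that w is fixed from the data themselves, after the filtering has finished.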
6.2 A single filtering process

Let us begin by analysing a stream of about one day of the Explorer data with the layout used in Section 4.3, i.e. N = 131072, M = 36 and M′ = 64. In fact, we choose exactly the same time series, starting on August 3rd, 1991, in order to be able to compare the non-leaking and the leaking methods. We thus get the results shown in Figure 3.a, where we have also externally added a signal with characteristics similar to those of the previous example. Its amplitude is now slightly bigger, h0 = 3 × 10⁻²³, i.e., an SNR of about 1, because the new method is slightly less sensitive than the non-leaking one, as we have shown. And, of course, the signal now spreads across different frequency bins: k0 = 50918 and ϵ0 = 0.1.
A plot of the distribution of Z(k) is displayed in Figure 3.b, where it is contrasted both with p(z) in (89), the theoretical probability density, and with p(Z) once w has been computed following the prescription in (100), the corrected one. The agreement of the latter with the experimental probability density is again remarkable. It is worth pointing out that when the number of processed blocks increases, the value of w rapidly approaches one, thus becoming an irrelevant parameter. As a matter of fact, Figure 4, where we present the same procedure with the same data but extending the processed time to two days, shows that p(z) is then a sufficiently accurate expression for the probability density of the actual filter output.
In the present case we can also compute the error of the first kind in terms of the threshold λ0, and invert this relationship. By way of example, the threshold levels, in the units of the graphs in Figures 3 and 4, are 0.96 and 0.68, respectively, for a false alarm probability Q0 = 10⁻⁵. In either case the signal is clearly above these thresholds, as it has respective heights of 1.12 and 0.83. From here we can rather accurately determine ϵ0, too: we find q̂ = 6 when M′ = 64 (i.e., ϵ0,estimated = 0.094), and q̂ = 13 when M′ = 128 (i.e., ϵ0,estimated = 0.101), for a true value ϵ0 = 0.1.
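The inversion of the false-alarm relation Q0 = ∫_{λ0}^{∞} p(z) dz can be carried out numerically for any output density. The sketch below assumes a generic, known pdf supplied as a callable, since the paper's density (89) is not reproduced in this excerpt:

```python
import numpy as np

def threshold_from_false_alarm(pdf, Q0, z_max=50.0, n=100000):
    """Numerically invert Q0 = integral_{lambda0}^{inf} p(z) dz for
    the threshold lambda0.  pdf is the (assumed known) filter-output
    probability density; z_max must be large enough that the
    neglected tail beyond it is negligible."""
    z = np.linspace(0.0, z_max, n)
    p = pdf(z)
    dz = z[1] - z[0]
    # cumulative trapezoidal integration, then the survival function
    cdf = np.concatenate(([0.0], np.cumsum((p[:-1] + p[1:]) * dz / 2.0)))
    tail = cdf[-1] - cdf
    # tail decreases with z; reverse both arrays so np.interp can be used
    return float(np.interp(Q0, tail[::-1], z[::-1]))
```

For instance, for a unit-exponential density and Q0 = 10⁻⁵ this returns λ0 ≈ ln(10⁵) ≈ 11.5, in the (arbitrary) units of that density.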

OUTLOOK
The long term operation of cryogenic GW detectors opens the possibility of looking for long duration GW signals in the data generated by them, since signal-to-noise ratios are enhanced by the availability of long integration times. Monochromatic, as well as stochastic signals belong in this category, though the latter require data from two or more independent antennae. In this paper we have addressed the problem of the design of suitable algorithms to single out possible monochromatic signals, coming from any direction in the sky, in the background of the noisy data produced by a cylindrical bar.
This is not such a simple problem, for a variety of reasons: computers are not arbitrarily powerful, detector duty cycles are not 100%, data quality is not homogeneous, Doppler shifts distort monochromaticity, etc. Although some of the characteristics of the data are peculiar to the detector system, there are a number of procedures which should be quite generally usable. This paper is concerned with the problem of setting up filtering algorithms which enable the selection of candidate signals on the basis of threshold-crossing criteria applied to the filtered data. To this end we have considered banks of filters which have a sinusoidal form but which also enable signal phase estimation. We have determined the pdf's at the filter output and thence consistently established the probability of crossing a given level. We have also designed suitable estimators of the noise spectral density which take into consideration the possibility that the signal lie in one or more of the FFT frequency bins for the complete duration of the analysed data.
These procedures have been checked by means of simulations on top of real data generated by the Explorer detector of the Roma group in 1991, and they work quite well: the theoretical predictions of the improvement in SNR as longer stretches of data are processed coincide remarkably with those observed in the real data analysis. They thus appear promising, and we expect them to be useful for analysing other resonant detector data, whether from cylinders or from the projected large spherical antennae, and also, with suitable adaptive modifications, from the large-scale interferometers currently under construction.
As we have seen with the simulations in this paper, the Explorer detector comfortably sees amplitudes of 10⁻²³, and therefore one can expect it to be sensitive to signals of a few times 10⁻²⁴. These are still rather high values for the expected amplitudes of signals of astrophysical origin, so threshold-crossing criteria must be established with this in mind: if we set a very low false alarm tolerance then the chances of missing a real event will grow, whereas if we set it rather high, real signals will be treated on the same footing as random noise.
Real signals, however, have characteristic Doppler patterns, whose correct assessment should provide a sounder selection of candidate signals. In a forthcoming article we plan to apply the methodology developed here to a systematic analysis of the above-mentioned Explorer detector data over a long period of time, and to extend it to include Doppler-shift effects in the data. The algorithms of Section 5 will be a powerful tool in this respect, if implemented from the perspective of the zoom transform (Yip 1976), to enhance the frequency resolution within the spectral neighbourhood of pre-selected candidate lines.