An experiment to test the discreteness of time

Time at the Planck scale ($\sim 10^{-44}\,\mathrm{s}$) is an unexplored physical regime. It is widely believed that probing the Planck time will long remain an impossible task. Yet, we propose an experiment to test the discreteness of time at the Planck scale and estimate that it is not far removed from current technological capabilities.


Introduction
Optical clocks using strontium-87 ($^{87}$Sr) are among the most accurate in the world. The time elapsed between two of their ticks is about $10^{-15}$ s (the inverse of the strontium transition frequency), with a precision of $10^{-19}$ [1]. Physical phenomena with much smaller characteristic timescales have also been measured. For instance, the lifetime of the top quark is $10^{-25}$ s. Such a result is obtained experimentally from a statistical analysis, where the short duration of the lifetime is compensated by a large number of events. At the theoretical level, physicists consider even shorter scales: in primordial cosmology, the inflation epoch is believed to have lasted $10^{-32}$ s. Using a cosmological model, [2] argues that the precision of recent atomic clocks already sets an upper bound of $10^{-33}$ s for a fundamental period of time.
Planck time is a far smaller timescale. We recall that the Planck time is defined as $t_P = \sqrt{\hbar G / c^5} \approx 5.4\times 10^{-44}\,\mathrm{s}$, where $G$ is Newton's constant, $\hbar$ the reduced Planck constant and $c$ the speed of light. It can seem an impossible task to probe time at the Planck scale. However, the example of the lifetime of the top quark shows that it is possible to overcome clock accuracy limitations by several orders of magnitude using statistics. Here, we examine the following question: if time behaves differently from a continuous variable at the planckian scale, how could the departure from this behaviour be inferred experimentally? To answer this question, we assume that proper time differences take discrete values in integer multiples of the Planck time, and devise a low energy experiment that would detect this effect. This work is motivated by recent experimental proposals to detect the non-classicality of the gravitational field through gravity mediated entanglement (GME) [3,4,5,6,7] and the production of nongaussianity [8]. Since the quantum gravity regime of particle physics may be practically impossible to probe, it is intriguing that these low energy experiments are not too far removed from current capabilities. Instead of accelerators, these proposals suggest quantum controlling slow moving nanoparticles or using a Bose-Einstein condensate. Thus, quantum gravity phenomenology provides a further motivation to the current push to develop technologies for setting mesoscopic masses in path superposition [9,10,11].
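As a quick numerical sanity check, the Planck scales invoked throughout this paper follow directly from the fundamental constants. The sketch below uses standard rounded values for $G$, $\hbar$ and $c$:

```python
import math

# Order-of-magnitude check of the Planck scales quoted in the text,
# using standard rounded values of the fundamental constants (SI units).
G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

t_P = math.sqrt(hbar * G / c**5)   # Planck time, ~ 5.4e-44 s
m_P = math.sqrt(hbar * c / G)      # Planck mass, ~ 2.2e-8 kg (mesoscopic!)
l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~ 1.6e-35 m

print(f"t_P = {t_P:.2e} s")
print(f"m_P = {m_P:.2e} kg")
print(f"l_P = {l_P:.2e} m")
```

Note that while $t_P$ and $l_P$ are absurdly small, $m_P$ is the mass of a fine grain of dust, which is what makes the interplay exploited below possible.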
In the above mentioned proposals, the effect being leveraged is an interesting interplay between the Planck mass, which is a mesoscopic quantity, and the Planck time. In particular, in the experimental setup to detect GME proposed in [3], two masses with an embedded magnetic spin are set in a spin-dependent path superposition and become entangled as a result of their gravitational interaction. Once the superposition is undone, the spins are entangled and spin measurements can reveal this entanglement. In [12] it was shown that the relative quantum phase $\delta\phi$ can be derived within general relativity as
$$\delta\phi = \frac{m c^2}{\hbar}\,\delta\tau,$$
where $m$ is the mass and $\delta\tau$ is the difference in proper time experienced by one mass in the two branches.
The effect is most pronounced when $\delta\phi$ approaches unity. Note that then, if in addition we take $m \sim m_P$, we have that $\delta\tau \sim t_P$. In other words, if the experiment were operating in this regime it would be probing proper time differences at the Planck scale. In [13], it was pointed out that if $\delta\tau$ can only take discrete values with steps of planckian size, then this would in principle have observable consequences in the GME setup.
The current work is a quantitative investigation into this idea, and it presents two considerable improvements. First, we propose a new experimental setup that is significantly easier to implement. The protocol in [13] required setting two masses in superposition and measuring their entanglement as a function of time. However, since entanglement plays no special role in the effect, the experiment we propose has a single mass in superposition. Second, in this work, we present an order of magnitude analysis of the experimental requirements for detecting the effect. This includes considerations of environmental noise, errors in the measurement and control of the experimental apparatus, and sources of decoherence. We identify a set of experimental parameters, reported in Table 1, from which we conclude that the detection of a fundamental time discreteness may not be too far removed from current technological capabilities.
The plan is the following: in section 2 we present the experimental setup; in section 3 we introduce the hypothesis that proper time differences are discrete at the Planck level; in section 4 we deduce the constraints on the experimental parameters that make this discreteness detectable; in section 5 we suggest a set of reasonable parameters that fulfil the constraints; in section 6 we complete the analysis by considering decoherence; in section 7 we discuss the hypothesis.

Experimental setup
The proposed experimental setup is depicted in figure 1. A spherical nanoparticle of mass $m$ with an embedded magnetic spin is dropped simultaneously with a second mass $M$. The mass $m$ is then put into a spin-dependent superposition of paths by the application of a series of electromagnetic pulses. This technique was proposed in [3,14]. In the branch of closest approach, $m$ and $M$ are at a distance $d$; in the other, they are at a distance $d+l$. The superposition is held at these distances for a time $t$ as measured in the laboratory frame. While the two masses free fall, they interact gravitationally. The two quantum branches in the total state evolve differently, accumulating a relative phase. After the superposition has been undone, this phase is visible in the state of the spin of the mass $m$.

Figure 1: For a time $t_{acc}$, an inhomogeneous magnetic field is applied that sets a mass $m$ with embedded spin in a superposition of two paths, at a distance $d$ and $d+l$, respectively, from another mass $M$. The masses are in free fall for a time $t$, as measured in the laboratory, after which the procedure is reversed and the superposition undone. During this time $t$, the two trajectories accumulate a different phase due to the gravitational interaction with $M$.
Let us see this in detail. The quantum state of the mass $m$ is given by its position in the apparatus and the orientation of its embedded spin. There will be three relevant position states, $|L\rangle$, $|C\rangle$ and $|R\rangle$, respectively left, centre and right. For the spin, we use the canonical basis, $|{\uparrow}\rangle$ and $|{\downarrow}\rangle$, in the $z$-direction. The mass $m$ is prepared at $t_0$ in the central position with the spin in the positive $x$-direction:
$$|\psi(t_0)\rangle = |C\rangle \otimes \tfrac{1}{\sqrt{2}}\left(|{\uparrow}\rangle + |{\downarrow}\rangle\right).$$
An inhomogeneous magnetic field is then applied to the mass $m$, entangling its position with its spin, so that at time $t_1$ the state is
$$|\psi(t_1)\rangle = \tfrac{1}{\sqrt{2}}\left(|L\rangle|{\uparrow}\rangle + |R\rangle|{\downarrow}\rangle\right).$$
The particle is then allowed to free-fall for a time $t$. During this time, it interacts gravitationally with the mass $M$. The displacement of the masses due to their gravitational attraction is negligible. The two states $|L\rangle$ and $|R\rangle$ are eigenstates of the hamiltonian and each acquires a phase proportional to the newtonian potential induced by $M$. So at time $t_2$ the state is
$$|\psi(t_2)\rangle = \tfrac{1}{\sqrt{2}}\left(e^{i\phi_L}|L\rangle|{\uparrow}\rangle + e^{i\phi_R}|R\rangle|{\downarrow}\rangle\right), \quad \text{where} \quad \phi_L = \frac{G m M t}{\hbar\,(d+l)}, \qquad \phi_R = \frac{G m M t}{\hbar\, d}.$$
At this point, another inhomogeneous magnetic field is applied to undo the superposition. The final state of the particle is, up to a global phase,
$$|\psi(t_3)\rangle = |C\rangle \otimes \tfrac{1}{\sqrt{2}}\left(|{\uparrow}\rangle + e^{i\delta\phi}|{\downarrow}\rangle\right),$$
where the relative phase $\delta\phi$ is given by
$$\delta\phi = \phi_R - \phi_L = \frac{G m M t}{\hbar}\left(\frac{1}{d} - \frac{1}{d+l}\right). \tag{9}$$
Information about the gravitational field is now contained in the state of the spin, which in turn can be estimated from the statistics of spin measurements. Concretely, we consider a measurement of the spin of the particle along the $y$-direction, i.e. in the basis $|{\pm i}\rangle = \tfrac{1}{\sqrt{2}}(|{\uparrow}\rangle \pm i\,|{\downarrow}\rangle)$. Born's rule gives the probability $P_+$ of finding the spin in the state $|{+i}\rangle$:
$$P_+ = \frac{1}{2}\left(1 + \sin\delta\phi\right), \tag{11}$$
where we compute $\delta\phi$ as a function of $m$, $M$, $d$, $l$ and $t$ through equation (9). This equation for the probability is a theoretical prediction of both semiclassical gravity (assuming $m$ does not collapse) and linearised quantum gravity in this regime. Experimentally, the probability can be measured by the relative frequencies in collected statistics. The experiment is repeated $N$ times keeping the experimental parameters fixed. If the outcome $|{+i}\rangle$ is recorded $N_+$ times, the frequency
$$p_+ = \frac{N_+}{N} \tag{12}$$
is then the experimentally measured value of the probability.
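As a numerical illustration of the protocol, the sketch below evaluates the newtonian phase of equation (9) and the probability of equation (11). The values of $m$, $l$ and $t$ are those discussed later in the text; $d$ and in particular the source mass $M$ are assumed illustrative values, not figures taken from table 1:

```python
import math

# Illustrative estimate of the gravitational phase
#   delta_phi = (G m M t / hbar) * (1/d - 1/(d+l))     (eq. 9)
# and of the probability P_+ = (1 + sin(delta_phi)) / 2 (eq. 11).
G, hbar = 6.674e-11, 1.055e-34
m = 3e-10   # mass in superposition, kg
M = 2e-9    # source mass, kg -- assumed illustrative value ("speck of dust")
d = 0.17    # distance of the closest branch, m -- assumed illustrative value
l = 1e-7    # superposition size, m
t = 0.1     # free-fall time, s

delta_phi = (G * m * M * t / hbar) * (1.0 / d - 1.0 / (d + l))
P_plus = 0.5 * (1.0 + math.sin(delta_phi))
print(f"delta_phi = {delta_phi:.3f} rad, P_+ = {P_plus:.3f}")
```

With these numbers the phase is a sizeable fraction of a radian, so the deviation of $p_+$ from $1/2$ is comfortably within reach of a statistical measurement.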
This procedure can be repeated for different sets of experimental parameters to verify the functional dependence of $p_+$ on them. In what follows, we propose an experiment that can detect a statistically significant discrepancy between $P_+$ and $p_+$. This discrepancy would signal a departure from the behaviour expected in the low-energy limit of linearised quantum gravity and other theories that predict (11). The above experimental setup is similar to that proposed to detect GME in [3], with the main difference that for our purpose we only require one mass, not two, in a superposition of paths. It is thus conceptually more similar to the celebrated Colella-Overhauser-Werner (COW) experiment [16,17]. However, the task we have set ourselves here, and the method to achieve it, go much beyond showing that gravity can affect a quantum mechanical phase and induce an interference pattern. To detect a potential discreteness of time, we need a more sensitive apparatus, and so the gravitational source $M$ will need to be much weaker. In our case, $M$ is not the Earth but a mesoscopic particle, essentially a speck of dust.

Hypothesis: Time Discreteness
While the newtonian limit of linearised quantum gravity is sufficient to compute the phase difference $\delta\phi$, it can also be understood in general relativistic terms [12,13]. The mass $M$ induces a Schwarzschild metric which dilates time differently along each of the two possible trajectories of $m$. Then, equation (9) can be recast as
$$\delta\phi = \frac{m c^2}{\hbar}\,\delta\tau, \tag{13}$$
where $\delta\tau$ is the difference of proper time between the two trajectories, given by
$$\delta\tau = \frac{G M t}{c^2}\left(\frac{1}{d} - \frac{1}{d+l}\right). \tag{14}$$
Now, it is widely believed that the smooth geometry of general relativity should be replaced, once quantised, by some discrete structure. In particular, we may expect time to be granular in some sense. In which sense precisely, we do not know. However, since $\delta\tau$ admits a straightforward interpretation as a covariant quantum clock, it makes a good candidate to reveal discrete features of time. Thus we make the following hypothesis: $\delta\tau$ can only take values which are integer multiples of the Planck time $t_P$. That is, (14) is modified to
$$\delta\tau = n\, t_P, \qquad n \in \mathbb{N}. \tag{15}$$
Additional motivation for the hypothesis and possible alternatives are discussed in section 7. For now, it can be taken just as the simplest implementation of the idea that time is discrete at a fundamental level, similar in philosophy to the idea that everyday-life matter is not continuous, but instead made of atoms. In the rest of this work, we devise an experiment to detect this discreteness and estimate its feasibility requirements. Equation (15) is still incomplete: we need to posit a functional relation between the level $n$ and the parameters $M$, $d$, $l$, $t$. We rewrite equation (14) as
$$\delta\tau = \frac{t}{\beta}\, t_P, \tag{16}$$
where
$$\beta = t_P\, \frac{c^2}{G M}\left(\frac{1}{d} - \frac{1}{d+l}\right)^{-1}, \tag{17}$$
and we take $n$ to be given by the floor function
$$n = \left\lfloor \frac{t}{\beta} \right\rfloor. \tag{18}$$
That is, $n$ is the integer part of the dimensionless quantity $t/\beta$. The main lessons of our results do not depend on the specific choice (18) for the functional dependence between $t/\beta$ and $n$. Other modifications of the continuous behaviour in (14), so long as they display features of planckian size, could be probed by the experiment.
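The dimensionless clock variable $t/\beta$ and the level $n$ can be sketched numerically. The parameter values, in particular the source mass $M$ and the distance $d$, are illustrative assumptions in the range discussed later:

```python
import math

# The dimensionless clock variable t/beta, with
#   beta = t_P c^2 / (G M (1/d - 1/(d+l)))   (reconstructed form of eq. 17),
# and the discrete level n = floor(t/beta) of eq. 18.
G, c = 6.674e-11, 2.998e8
t_P = 5.39e-44                       # Planck time, s
M, d, l, t = 2e-9, 0.17, 1e-7, 0.1   # M and d are assumed illustrative values

beta = t_P * c**2 / (G * M * (1 / d - 1 / (d + l)))
x = t / beta
n = math.floor(x)
delta_tau = n * t_P   # hypothesised discrete proper-time difference
print(f"t/beta = {x:.2f}, n = {n}, delta_tau = {delta_tau:.2e} s")
```

With these values $t/\beta$ sits near the top of the range $0 < t/\beta \le 10$ scanned by the proposed experiment.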
We have then
$$\delta\tau = \left\lfloor \frac{t}{\beta} \right\rfloor t_P. \tag{19}$$
The consequences of this hypothesis are revealed in the measured probability $p_+$ of equation (12). If time behaves continuously, $p_+$, as a function of $t/\beta$, will fit the smooth (blue) curve in figure 2, given by
$$P_+ = \frac{1}{2}\left[1 + \sin\!\left(\frac{m}{m_P}\,\frac{t}{\beta}\right)\right]. \tag{20}$$
If the hypothesis holds, the observed profile for the probability will follow that of the orange step function in figure 2, given by
$$P_+^h = \frac{1}{2}\left[1 + \sin\!\left(\frac{m}{m_P}\left\lfloor\frac{t}{\beta}\right\rfloor\right)\right]. \tag{21}$$
To test the hypothesis, the strategy is thus to plot the curve $p_+(t/\beta)$ experimentally. Observing plateaux would be the signature of time-discreteness. When $\delta\tau$ is smooth as in equation (16), the probability depends smoothly on $t/\beta$, while if $\delta\tau$ is discrete as in equation (19), there are discontinuities.

Figure 2: We have taken the value $m = 10^{-2}\,m_P$. The experimental parameters shown in table 1 would produce 100 data points scanning the range of $t/\beta$ depicted here, with a sufficient resolution to decide which of the two curves is realised in nature.
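The two competing profiles can be encoded in a few lines; the sketch below uses the value $m = 10^{-2}\,m_P$ of figure 2:

```python
import math

# Compare the two predicted probability profiles as functions of x = t/beta:
#   continuous: P_smooth(x) = (1 + sin((m/m_P) x)) / 2         (eq. 20)
#   discrete:   P_step(x)   = (1 + sin((m/m_P) floor(x))) / 2  (eq. 21)
M_RATIO = 1e-2   # m / m_P, as in figure 2

def P_smooth(x):
    return 0.5 * (1 + math.sin(M_RATIO * x))

def P_step(x):
    return 0.5 * (1 + math.sin(M_RATIO * math.floor(x)))

# Within a plateau the step curve is flat, while the smooth curve keeps growing:
print(P_smooth(3.1), P_smooth(3.9))
print(P_step(3.1), P_step(3.9))
```

The signature of discreteness is precisely this flatness: any two data points with the same integer part of $t/\beta$ should return statistically identical frequencies.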

Ensuring Visibility of the Effect
Each experimental data point for $p_+(t/\beta)$ is obtained by computing the statistical frequency of the outcome $|{+i}\rangle$. Point by point, a scatter plot of $p_+$ against $t/\beta$ will be obtained. We must choose the experimental parameters so that the difference between $P_+$ and $P_+^h$ can be resolved. This imposes requirements on the minimal precision of the experimental apparatus and on the maximal permissible gravitational noise in the environment.

Visibility of the Vertical Axis
The uncertainty $\Delta p_+$ for the probability $p_+$ after $N$ runs results from using finite statistics and is of the order
$$\Delta p_+ \sim \frac{1}{\sqrt{N}}. \tag{22}$$
The vertical step $\alpha$ between the plateaux is given by
$$\alpha = \frac{1}{2}\left|\sin\!\left(\frac{m}{m_P}(n+1)\right) - \sin\!\left(\frac{m}{m_P}\,n\right)\right|. \tag{23}$$
We assume that $m \ll m_P$, consistent with the fact that it is hard to put a large mass in a superposition. (The case $m \sim m_P$ or $m > m_P$ can also be considered. The analysis will be different, as the approximation (24) will not hold. The effect can in principle still be detected in these cases, but will be harder to implement experimentally because larger masses are harder to put in a superposition.) The above expression simplifies to
$$\alpha \approx \frac{m}{2 m_P}\left|\cos\!\left(\frac{m}{m_P}\,\frac{t}{\beta}\right)\right|. \tag{24}$$
So the steps are most visible when
$$\frac{t}{\beta}\,\frac{m}{m_P} \ll 1. \tag{25}$$
Then the expression simplifies to
$$\alpha \approx \frac{m}{2 m_P}. \tag{26}$$
Requiring that the probability uncertainty is an order of magnitude smaller than the vertical step, $\Delta p_+ < 10^{-1}\alpha$, we find the constraint
$$N > 4\times 10^{2}\left(\frac{m_P}{m}\right)^2. \tag{27}$$
We see that a larger mass $m$ means that fewer runs $N$ per data point are required, which implies a shorter total duration $T_{tot}$ of the experiment. Indeed, since plotting $p_+(t/\beta)$ requires $N$ runs per data point, each run requiring at least a time $t$, a lower bound for the total duration of the experiment is
$$T_{tot} \geq N_{dp}\, N\, t, \tag{28}$$
where $N_{dp}$ is the number of data points. Thus, the constraint (27) can be restated as
$$T_{tot} > 4\times 10^{2}\, N_{dp}\, t \left(\frac{m_P}{m}\right)^2. \tag{29}$$
This constraint imposes a trade-off between the time required to resolve the discreteness and the mass that has to be put in superposition. It counter-balances the fact that it is harder to achieve quantum control of a large mass.
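A back-of-the-envelope evaluation of this vertical constraint, under the illustrative assumptions $m = 10^{-2}\,m_P$, $N_{dp} = 10^2$ and $t = 10^{-1}$ s used below:

```python
# Runs per data point needed to resolve the vertical step alpha ~ m/(2 m_P):
#   Delta p_+ ~ 1/sqrt(N) < alpha/10   =>   N > 4e2 (m_P/m)^2.
m_over_mP = 1e-2   # mass in units of the Planck mass (illustrative)
N_dp = 100         # number of data points
t = 0.1            # duration of one run, s

alpha = m_over_mP / 2
N_min = (20 / m_over_mP) ** 2       # = 4e2 * (m_P/m)^2
T_tot_min = N_dp * N_min * t        # lower bound on the total duration
print(f"N > {N_min:.1e} runs per point; T_tot > {T_tot_min:.1e} s")
```

The resulting $T_{tot}$ of a few times $10^7$ s is of the order of a year, which is what motivates the choice $T_{tot} \sim 10^7$ s in the next subsection.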

Visibility of the Horizontal Axis
The uncertainty in $t/\beta$ is found via the standard formula for the propagation of uncertainty and can be expressed as
$$\Delta\!\left(\frac{t}{\beta}\right) = \frac{t}{\beta}\, U, \tag{30}$$
where
$$U = \sqrt{\left(\frac{\Delta t}{t}\right)^2 + \left(\frac{\Delta M}{M}\right)^2 + \left(\frac{\partial \ln(t/\beta)}{\partial d}\,\Delta d\right)^2 + \left(\frac{\partial \ln(t/\beta)}{\partial l}\,\Delta l\right)^2}. \tag{31}$$
By assumption (18), the width of the plateaux is 1. To place several data points on each plateau, we require the typical uncertainty to be an order of magnitude smaller, i.e. $\Delta(t/\beta) < 10^{-1}$. We thus impose the constraint
$$\frac{t}{\beta}\, U < 10^{-1} \tag{32}$$
on the experimental parameters. Note that a given $U$ determines the highest value of $n = \lfloor t/\beta \rfloor$ for which the discontinuities can be resolved.
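The propagation formula can be sketched as follows, under the simplification $l \ll d$ (so that $t/\beta \propto M t l / d^2$); the input precisions are illustrative assumptions:

```python
import math

# Propagated relative uncertainty on t/beta, with l << d so that
# t/beta ∝ M t l / d^2 and hence
#   U = sqrt((dt/t)^2 + (dM/M)^2 + (2 dd/d)^2 + (dl/l)^2).
def U(dt_t, dM_M, dd_d, dl_l):
    return math.sqrt(dt_t**2 + dM_M**2 + (2 * dd_d)**2 + dl_l**2)

# Illustrative: each parameter controlled to 1 part in 1000 keeps U below
# the 1e-2 needed to resolve plateaux up to t/beta = 10.
u = U(1e-3, 1e-3, 1e-3, 1e-3)
print(f"U = {u:.2e}")
```

Note the factor of 2 on $\Delta d / d$: since $d$ enters quadratically, it is the parameter whose control matters most, which is also why $d$ is the natural knob for scanning $t/\beta$.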

Gravitational Noise
There is no analog of a Faraday cage for gravitational interactions, so influences by other masses will also contribute to the accumulated phase δφ. Since the experiment we are considering is in a sense an extremely sensitive gravimeter, these would need to be taken carefully into account. We distinguish between 'predictable' gravitational influences and 'unpredictable' gravitational influences, i.e. gravitational noise. The latter type will dictate the degree of isolation required for a successful realisation of the experiment, adding another visibility constraint, while the former type can be dealt with by calibration.
The presence of unexpected masses in the vicinity of the apparatus may disturb the measurement. It will contribute to the proper time dilation by an amount $\eta$, modifying (21) to
$$P_+^h = \frac{1}{2}\left[1 + \sin\!\left(\frac{m}{m_P}\left(\left\lfloor\frac{t}{\beta}\right\rfloor + \frac{\eta}{t_P}\right)\right)\right]. \tag{33}$$
Getting a single data point requires $N$ drops, and for each drop the perturbation $\eta$ may a priori be different. However, it should be small enough that it does not make the probability $P_+^h$ jump to another step, i.e. $\eta$ is a negligible noise if
$$|\eta| \ll t_P. \tag{34}$$
Of course, $\eta$ is a random variable over which the control is limited. To a first approximation, the condition (34) can be implemented over the $N$ drops by requiring $\Delta\eta < 10^{-1}\, t_P$.
For instance, the gravitational noise induced by the presence of a mass $\mu$ at a distance $D \gg l, d$ is at most
$$\eta \approx \frac{G \mu t}{c^2}\left(\frac{1}{D} - \frac{1}{D+l}\right) \approx \frac{G \mu t\, l}{c^2 D^2}. \tag{35}$$
Thus, we get a fair idea of how isolated the apparatus should be with the condition
$$\frac{G \mu t\, l}{c^2 D^2} < 10^{-1}\, t_P. \tag{36}$$
The ratio $A = \mu / D^2$ is a measure of the impact that a mass $\mu$ has on the visibility of the discontinuities if it is allowed to move uncontrollably as close as a distance $D$ from the experiment. Thus, we end up with the following constraint:
$$A = \frac{\mu}{D^2} < 10^{-1}\,\frac{c^2\, t_P}{G\, t\, l}. \tag{39}$$
This equation is a requirement on the control of the environment necessary to resolve the discontinuities. Shorter superpositions are less sensitive to the gravitational noise. Above, we took into account the effect of a single mass $\mu$. This is not sufficient to guarantee that there will not be a cumulative effect from several masses around. However, note that if these masses are homogeneously distributed, their contributions average out.
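Numerically, the reconstructed noise threshold works out as follows; the bee example used later in the text is included as a cross-check, with its $\sim 0.1$ g mass an assumed typical value:

```python
# Maximum tolerable A = mu/D^2 for a stray mass mu at distance D, from the
# reconstructed noise constraint A < 0.1 * c^2 t_P / (G t l).
G, c, t_P = 6.674e-11, 2.998e8, 5.39e-44
t, l = 0.1, 1e-7   # run time (s) and superposition size (m) used in the text

A_max = 0.1 * c**2 * t_P / (G * t * l)
print(f"A_max = {A_max:.1e} kg/m^2")

# Cross-check: a bee of ~0.1 g (assumed typical mass) flying 230 m away
# produces an A of the same order as this threshold.
A_bee = 1e-4 / 230**2
print(f"bee at 230 m: A = {A_bee:.1e} kg/m^2")
```

That a sub-milligram-per-square-metre threshold comes out of the Planck time is another instance of the leverage between planckian and mesoscopic scales exploited throughout this proposal.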
The 'predictable' type of gravitational influences are systematic errors arising for example from the gravitational field of the Earth and the Moon, from the motion of other large bodies, from tectonic activity and sea tides, but also from small masses that will unavoidably be present in the immediate vicinity of the mass $m$, such as the experimental apparatus itself and the surrounding laboratory. Given the extreme sensitivity of the apparatus, it will likely not be possible to make all these gravitational influences satisfy (39). However, one can calibrate for the contribution of a mass $\mu$ at distance $D$ if it moves slowly with respect to the time $N t$ that it takes to collect a data point, i.e. if $v\, N t \ll D$, with $v$ the speed of the mass. Another possibility that can be calibrated for is if the mass is not moving slowly but the uncertainty in its position is small with respect to $D$ (for instance, a moving mechanical part or the Moon).

Balancing act
The three experimental constraints identified in the previous subsection are repeated below:
$$\text{[Vertical]} \qquad T_{tot} > 4\times 10^{2}\, N_{dp}\, t \left(\frac{m_P}{m}\right)^2,$$
$$\text{[Horizontal]} \qquad \frac{t}{\beta}\, U < 10^{-1},$$
$$\text{[Noise]} \qquad A = \frac{\mu}{D^2} < 10^{-1}\,\frac{c^2\, t_P}{G\, t\, l}.$$
We now proceed to identify a set of reasonable parameters that satisfy the constraints. Our series of assumptions is an educated guess based on our understanding of current technological trends.
1. Any of the parameters $M$, $d$, $l$ and $t$ could be modulated to scan a range of $t/\beta$. Since $t/\beta$ is most sensitive to changes in $d$ (inverse-square dependence), we assume the modulation of $d$, keeping $M$, $l$ and $t$ fixed.
2. The total duration of the experiment is about a year, $T_{tot} \sim 10^7$ s.
3. The plot requires about a hundred data points, $N_{dp} \sim 10^2$, to be distributed over ten plateaux.

(Footnote 4: An example of a calibration procedure is as follows. Let us assume that the different values of $t/\beta$ are obtained by changing $d$ while keeping $M$, $l$, and $t$ fixed (as considered in the next section). The mass $\mu$ will contribute a constant phase $\phi_B$, which we can estimate by running the experiment without $M$. So long as the masses are slow moving, it suffices to rotate the measurement basis to $\{\tfrac{1}{\sqrt{2}}(|{\uparrow}\rangle \pm i\, e^{i\phi_B}|{\downarrow}\rangle)\}$ rather than $\{|{\pm i}\rangle\}$.)
4. Experimentally, the maximal distance between the two branches of the superposition cannot be very large, so we assume $l \ll d$.

From these first assumptions, the system of inequalities simplifies to
$$\text{[Vertical]} \quad t\left(\frac{m_P}{m}\right)^2 \lesssim 2.5\times 10^{2}\,\mathrm{s}, \qquad \text{[Horizontal]} \quad U < 10^{-2}, \qquad \text{[Noise]} \quad \frac{\mu}{D^2} < 10^{-1}\,\frac{c^2\, t_P}{G\, t\, l}.$$
The uncertainty $U$, defined by equation (31), depends on the precision in $t$, $M$, $d$ and $l$. With the assumption $l \ll d$, its expression simplifies to
$$U \approx \sqrt{\left(\frac{\Delta t}{t}\right)^2 + \left(\frac{\Delta M}{M}\right)^2 + 4\left(\frac{\Delta d}{d}\right)^2 + \left(\frac{\Delta l}{l}\right)^2}. \tag{50}$$
Then, the [Horizontal] inequality implies that $t$, $M$, $d$ and $l$ will have to be controlled to better than 1 part in 100.

5. It is reasonable to expect that the uncertainty $U$ will be dominated by the uncertainty in the superposition size $l$, thus $U \approx \Delta l / l$.

6. We assume it possible to control the size of the superposition to the scale of a few atoms, i.e. $\Delta l = 10^{-9}$ m.

7. From the above two points we have a lower bound for the value of $l$: the condition $U < 10^{-2}$ requires $l > 10^{2}\,\Delta l$. Taking $l$ larger would only make the experiment harder because of decoherence and gravitational noise. We thus take
$$l = 10^{-7}\,\mathrm{m},$$
which satisfies the horizontal constraint, allowing us to resolve the first 10 steps.
We have now solved the horizontal constraint and fixed $l$. The remaining constraints evaluate to
$$\text{[Vertical]} \quad t\left(\frac{m_P}{m}\right)^2 \lesssim 2.5\times 10^{2}\,\mathrm{s}, \qquad \text{[Noise]} \quad \frac{\mu}{D^2} < 10^{-1}\,\frac{c^2\, t_P}{G\, t\, l}.$$
Both inequalities suggest taking $t$ as small as possible. Nonetheless, $t$ cannot be too short, because the superposition is created by a magnetic field $B$ that separates the branches to a distance $l$. This process requires some time $t_{acc}$, which is bounded from below by the highest magnetic field $B_{max}$ that can be created in the lab. Concretely,
$$t_{acc} \sim l\,\sqrt{\frac{m}{\mu_B\, B_{max}}}, \tag{55}$$
where $\mu_B$ is the Bohr magneton ($\mu_B \approx 10^{-23}\,\mathrm{J\,T^{-1}}$).
8. $t$ should be at least as long as $t_{acc}$, say
$$t \approx 2\, t_{acc}. \tag{56}$$

9. Taking $B_{max} \sim 10^2$ T, which is the value of the strongest pulsed non-destructive magnetic field regularly used in research [21], we get in SI units $t_{acc} \approx 5\times 10^{-2}$ s, and thus $t \approx 10^{-1}$ s.

10. Considering the difficulty of putting a heavy mass in superposition, we can minimise both $t$ and $m$ under the vertical constraint, which with $t \approx 10^{-1}$ s gives $m \approx 10^{-2}\, m_P \approx 3\times 10^{-10}$ kg.

11. Considering a priori the difficulty of isolating the system from external perturbations, the noise inequality fixes the minimal upper bound for $A$, i.e. we want to tolerate perturbations as high as
$$A = \frac{\mu}{D^2} \approx 10^{-9}\,\mathrm{kg\,m^{-2}}.$$
This threshold is very sensitive. To give an example, it corresponds to the gravity induced by a bee flying 230 m away. Such a high control might only be attainable in space, where cosmic dust particles, with a typical mass of 5 µg [22], would need to be kept 4 m away from the masses.

(Footnote 5: We assume the masses are made of a material that allows neglecting diamagnetic effects. If diamagnetism cannot be ignored, one has to resort to a more complicated scheme of pulses, inverting the direction of the magnetic field gradient at specific intervals as detailed in [18], or inverting both the direction of the gradient and the spins as proposed in [19]. Alternatively, one can use a different method of wavepacket separation, like that detailed in [20].)

We are thus left with one last condition, $t/\beta \le 10$, which reads, in SI units, $d \gtrsim 17$ cm. This corresponds to the lower bound of the range that $d$ will scan, corresponding to $t/\beta = 10$. The value $t/\beta = 1$ provides an upper bound of $d \approx 54$ cm. Note that the assumption made above that $\Delta d/d,\ \Delta M/M \lesssim 10^{-2}$ is indeed reasonable.
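The parameter choices above can be cross-checked in a few lines; the $d$ range follows from the inverse-square scaling of $t/\beta$ alone, given the quoted upper bound $d \approx 0.54$ m at $t/\beta = 1$:

```python
import math

# Cross-checks on the chosen parameter set (m, l, t, B_max as in the text).
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
mu_B = 9.274e-24   # Bohr magneton, J/T

m, l, t, B_max = 3e-10, 1e-7, 1e-1, 1e2

# Time needed to create the superposition, t_acc ~ l sqrt(m / (mu_B B_max)):
t_acc = l * math.sqrt(m / (mu_B * B_max))
print(f"t_acc ~ {t_acc:.1e} s, compared with t = {t} s")

# Since t/beta scales as 1/d^2, the t/beta = 10 point sits at
# d = d(t/beta = 1) / sqrt(10), with d(t/beta = 1) ~ 0.54 m as quoted:
d_low = 0.54 / math.sqrt(10)
print(f"d scans from {d_low:.2f} m (t/beta = 10) up to 0.54 m (t/beta = 1)")
```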
Casimir-Polder. So far, we have not taken into account the Casimir-Polder (CP) force between the two masses. The modification of the vacuum energy between two perfectly conducting, parallel discs of area $a$ a distance $d$ apart [23] results in a force
$$F_{CP} = \frac{\hbar c\, \pi^2}{240\, d^4}\, a.$$
Taking this force as an overestimate of that between two spherical dielectric particles of cross-sectional area $a$ at a distance $d$ apart, we see that the CP force is about a million times weaker than the gravitational force and can thus be neglected.
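An order-of-magnitude comparison of the two forces, with the source mass $M$ an assumed illustrative value:

```python
import math

# Compare the Casimir-Polder overestimate F_CP = hbar c pi^2 a / (240 d^4)
# with the Newtonian attraction F_G = G m M / d^2.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
m, M = 3e-10, 2e-9          # masses, kg (M is an assumed illustrative value)
R, d = 30e-6, 0.17          # particle radius and separation, m
a = math.pi * R**2          # cross-sectional area of the sphere

F_CP = hbar * c * math.pi**2 * a / (240 * d**4)
F_G = G * m * M / d**2
print(f"F_CP/F_G = {F_CP / F_G:.1e}")
```

Even at the closest point of the scanned range, the vacuum-energy force is suppressed by five to six orders of magnitude, justifying its neglect.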

Uncertainty on m.
A small shift $\delta m$ of the mass $m$ adds a phase difference $\epsilon = \frac{\delta m}{m_P}\cdot\frac{t}{\beta}$, which in turn causes a shift $\delta P$ in the probability. Since $m \ll m_P$ and $t/\beta < 10$, then $\epsilon \ll 1$ and the shift is to first order $\delta P \approx \frac{1}{2}\epsilon$. The uncertainty in $m$ does not affect the visibility of the probability axis if $\delta P \ll \alpha$, i.e. if $\frac{\delta m}{m} \ll \frac{2}{t/\beta}$. This last condition means that the mass $m$ should be known to one part in 100, which is easily reachable. This concludes our derivation of a set of parameters that satisfy the constraints of the previous section and thus allow us to probe planckian features of time. The values are summarised in table 1. As a corroboration of the analysis, the experimental plot is simulated for these parameters in figures 3 and 4. There, we see how the effect becomes visible when the gravitational noise and the uncertainty in the experimental parameters satisfy the constraints derived above.

Maintaining Coherence
A mass in superposition of paths will interact with the ambient black body radiation and stray gas molecules in the imperfect vacuum of the device. As the photons and molecules get entangled with the position degrees of freedom of the mass, the coherence of the superposition is lost and the phase cannot be recovered by observing interference between the two paths.
These unavoidable environmental sources of decoherence are well studied both theoretically and experimentally [20,10,25]. Gravitational time dilation can also be a source of decoherence for thermal systems [26], but requires much stronger gravitational fields than those considered in this experiment. We assume the experiment will be performed with a nanoparticle of mass $m = 3\times 10^{-10}$ kg and radius $R = 30$ µm. For the formulae appearing in this section we refer the reader to [20].

Black-Body Radiation
The typical wavelength of thermal photons ($\approx 10^{-5}$ m at room temperature) is much larger than $l$; thus spatial superpositions decohere exponentially in time with a characteristic time
$$\tau_{bb} = \frac{1}{\Lambda_{bb}\, l^2},$$
which is sensitive to the superposition size $l$. The factor $\Lambda_{bb}$ depends on the material properties of the mass as well as its temperature and that of the environment. If the environment and the mass are at the same temperature $T$, then the factor is
$$\Lambda_{bb} = \frac{8\cdot 8!\,\zeta(9)}{9\pi}\, c\, R^6 \left(\frac{k_B T}{\hbar c}\right)^9 \mathrm{Re}\!\left[\frac{\epsilon - 1}{\epsilon + 2}\right]^2,$$
where $\zeta$ denotes the Riemann zeta function and $\epsilon$ is the dielectric constant of the nanoparticle's material at the thermal frequency. We take $\epsilon = 5.3$, like that of diamond [27], for the purposes of this estimation. Plugging in the radius of 30 µm of the masses under consideration and the superposition size of $10^{-1}$ µm, we find that a coherence time of about 1 s, one order of magnitude above the $t$ of table 1, will require the temperature to be below 4 K.
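The temperature requirement can be reproduced numerically. The localisation rate below follows the form given in [20], with the prefactor as reconstructed here; it should be read as a sketch, though it does reproduce the $\sim 1$ s coherence time at 4 K quoted above:

```python
import math

# Black-body decoherence time tau_bb = 1 / (Lambda_bb * l^2), with the
# long-wavelength (scattering) localisation rate in the form of [20]:
def Lambda_bb(T, R, eps):
    hbar, c, k_B = 1.055e-34, 2.998e8, 1.381e-23
    zeta9 = 1.002   # Riemann zeta(9)
    return (8 * math.factorial(8) * zeta9 / (9 * math.pi)) * c * R**6 \
        * (k_B * T / (hbar * c))**9 * ((eps - 1) / (eps + 2))**2

R, l, eps = 30e-6, 1e-7, 5.3   # radius, superposition size, dielectric constant
tau_4K = 1 / (Lambda_bb(4.0, R, eps) * l**2)
tau_300K = 1 / (Lambda_bb(300.0, R, eps) * l**2)
print(f"tau_bb(4 K) ~ {tau_4K:.1e} s, tau_bb(300 K) ~ {tau_300K:.1e} s")
```

The brutal $T^9$ scaling is the whole story here: at room temperature the superposition is destroyed essentially instantly, while at liquid-helium temperatures it survives for the full run.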

Imperfect vacuum
The thermal de Broglie wavelength of a typical gas molecule ($\approx 10^{-10}$ m for He at 4 K) is many orders of magnitude below the superposition size $l$ considered here; thus a single collision can acquire full which-path information and entail a full loss of coherence. The exponential decay rate of the superposition is in this case independent of the size $l$ of the superposition, with a characteristic time given by the mean time between collisions in a gas at temperature $T$ and pressure $P$ of molecules of mass $m_g$. Assuming the gas is entirely made of helium, and setting the highest possible value for the temperature according to the previous section, we get
$$\tau_{gas} \approx 10^{-17}\,\mathrm{s}\cdot\frac{\mathrm{Pa}}{P}.$$
Thus a coherence time of $10\, t = 1$ s requires a pressure of $10^{-17}$ Pa. This is a regime of extremely low pressure and may present the most serious challenge for any experiment that involves setting masses of this scale in path superposition. To put things in perspective, pressures of the order of $10^{-18}$ Pa are found in nature in the warm-hot intergalactic medium [28], while the interstellar medium pressure is in the range of $10^{-14}$ Pa [29]. On the other hand, pressures as low as $10^{-15}$ Pa at 4 K have been reported since the 1990s in experiments employing cooling magnetic traps [30,31]. In a similar context to ours, the contemporary GME detection proposals quoted above require pressures of $10^{-15}$ Pa at 0.15 K [3]. Finally, the cryogenic requirements found in this section can be relaxed if the path superposition can be achieved faster: by equations (55) and (56), a stronger magnetic field shortens $t_{acc}$ and hence $t$, so that a shorter coherence time suffices.
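A crude kinetic-theory sketch, treating decoherence as geometric collisions of He atoms with the sphere, lands within an order of magnitude of the $10^{-17}\,\mathrm{s}\cdot\mathrm{Pa}/P$ figure above, which is as much as so rough an estimate can be expected to give:

```python
import math

# Kinetic-theory sketch of collisional decoherence: one He collision destroys
# coherence, so the coherence time is roughly the mean time between collisions,
#   tau_gas ~ k_B T / (P v_th pi R^2).
k_B = 1.381e-23
T, R = 4.0, 30e-6          # temperature (K) and particle radius (m)
m_He = 6.64e-27            # helium atom mass, kg
v_th = math.sqrt(8 * k_B * T / (math.pi * m_He))   # mean thermal speed, m/s

def tau_gas(P):
    return k_B * T / (P * v_th * math.pi * R**2)

print(f"tau_gas(1e-17 Pa) ~ {tau_gas(1e-17):.0f} s")
print(f"tau_gas(1e-15 Pa) ~ {tau_gas(1e-15):.2f} s")
```

The inverse scaling with $P$ makes the trade-off explicit: every order of magnitude gained on the vacuum buys an order of magnitude of coherence time.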

Discussion of the hypothesis
At first sight, the hypothesis $\delta\tau = n\, t_P$ (15) mimics the naïve picture of a tiny clock ticking at a constant rate, with a lapse $t_P$. This simple physical picture of the quantum mechanical phase as a sort of intrinsic "clock" ticking at planckian time intervals is appealing in its simplicity and does not depend on any particular model of quantum gravity. Thus, in our opinion, it is in its own right worth being looked at.
Whether this hypothesis is backed by a physical theory of time is unclear. In the well corroborated fundamental paradigms of general relativity and quantum mechanics, time is modelled as a continuous variable. However, in a more fundamental theory like quantum gravity, yet to be established, one can reasonably expect a modification of the notion of time at the planckian scale. We discuss two main avenues by which continuous time could become discrete:
A. Regard smooth spacetime not as fundamental but as an effective description on large scales, emerging from an underlying discrete lattice.
B. Promote time to a quantum observable with a discrete spectrum.
A. Most straightforwardly, (15) can be taken prima facie to arise from a kind of classical time discreteness. If the notion of proper time $\tau$ of general relativity becomes discrete in a linear sense, with regularly spaced planckian time intervals, then differences of proper time $\delta\tau$ will display a similar behaviour, from which (15) follows. This assumption is made for instance in the program of Digital Physics [32], which advocates that space may be nothing but a grid.
Of course, such a 'classical' discreteness would manifestly break Lorentz invariance. It might already be possible to set upper bounds on the discreteness of time from the limits set on Lorentz invariance violations by the study of the dispersion relations of light [33,34,35,36].
Before discussing possible implications of quantum theory, a comment is in order on the intermediate case of a classical but stochastic spacetime. For instance, if spacetime can be described by a single causal set, stochastic fluctuations of planckian size in proper times are to be expected [37,38,39]. Because of the statistical nature of the time measurement proposed here, finding a continuous behaviour for $\delta\tau$ would not necessarily exclude the possibility of a classical discreteness: it could simply be masked by stochastic fluctuations.

B.
Turning to the quantum theory, the discreteness of time may appear as the discreteness of the spectrum of some time operator. Contrary to general belief, Pauli's argument [40] has not ruled out the possibility of a time-operator but rather stressed the subtlety of its definition [41].
There are two main candidates for being the relevant time observable here: the proper time interval τ in each branch and the difference of proper time δτ between the branches. Then in both cases the question of which spectrum is to be expected should be answered.
Equation (15) can be regarded as the assumption of the linearity of the spectrum. For comparison, this is very different from the energy spectrum of the hydrogen atom, $E_n \propto -1/n^2$, but very similar to that of the harmonic oscillator, $E_n \propto n$. If the spectrum of $\tau$ is linear, then so is the spectrum of $\delta\tau$, which is what we assumed in the main analysis with equation (15). Thus, in this case, it does not really matter whether it is $\tau$ or $\delta\tau$ that is taken as the relevant quantum observable. On the contrary, for a non-linear spectrum, this question is crucial. As said earlier, the assumption of linearity is natural in the sense that it mimics the ticking of a clock, but it is so far not backed by any theory of quantum gravity.
In Loop Quantum Gravity (LQG), the spectra of the length, area and volume operators are famously discrete [42]. Discreteness of time may arise in a similar fashion from this theory, although nothing has been proven yet. The hypothesised linear behaviour is similar to the spectrum of the area operator in LQG [45],
$$A_j = 8\pi\gamma\, l_P^2\, \sqrt{j(j+1)}, \qquad j \in \mathbb{N}/2,$$
where $\gamma$ is a fundamental constant called the Immirzi parameter. There are indications that length has a spectrum that goes as a square-root progression in $j$ [46]. Geometrically, we would expect time to behave similarly to a length. In such a case, it makes all the difference whether the square-root behaviour applies to the proper time itself,
$$\tau = \sqrt{n}\, t_P, \tag{70}$$
or to the difference of proper time,
$$\delta\tau = \sqrt{n}\, t_P. \tag{71}$$
We first analyse the consequences of equation (70) on the visibility of the plateaux. We work in Planck units and take $l \ll d$ as in the main text, although the same result can be obtained without this assumption. The proper times $\tau_{far}$ and $\tau_{close}$ of the branches in which $M$ and $m$ are a distance $d+l$ and $d$ apart, respectively, are given in terms of the laboratory time $t$ according to general relativity by
$$\tau_{far} = t\left(1 - \frac{G M}{c^2\,(d+l)}\right), \qquad \tau_{close} = t\left(1 - \frac{G M}{c^2\, d}\right).$$
These are very large compared to the Planck time, as we are in the weak field regime and $t$ cannot be smaller than the period of the sharpest atomic clock. Let us now impose the discretisation (70):
$$\tau_{far} = \sqrt{n}\, t_P, \qquad \tau_{close} = \sqrt{n-k}\, t_P, \qquad n, k \in \mathbb{N}.$$
Equation (15) is thus replaced by
$$\delta\tau = \tau_{far} - \tau_{close} = \left(\sqrt{n} - \sqrt{n-k}\right) t_P.$$
The condition $l \ll d$ implies that $k \ll n$, so that the equation above simplifies to
$$\delta\tau \approx \frac{k}{2\sqrt{n}}\, t_P.$$
So in this case, a square-root behaviour for the spectrum of $\tau$ leads to a linear behaviour for $\delta\tau$ in $k$. Unfortunately, the factor of $\sqrt{n}$ in the denominator means that different values of $\delta\tau$ are exceedingly close to each other, making the experiment impossible in our proposed setup.
We now consider the case (71). We have
$$n = \left\lfloor \left(\frac{t}{\beta}\right)^2 \right\rfloor,$$
so that
$$P_+^{h'} = \frac{1}{2}\left[1 + \sin\!\left(\frac{m}{m_P}\sqrt{\left\lfloor \left(\frac{t}{\beta}\right)^2 \right\rfloor}\,\right)\right].$$
For small values of $t/\beta$, the plot of $P_+^{h'}$ is the same as that of $P_+^h$, studied in the main text. For larger values of $t/\beta$, both the width of the plateaux and the steps between them are smaller. Thus, the detection of such a discreteness is of similar difficulty so long as $t/\beta < 10$; see figure 5.

Figure 5: We take $m = 10^{-2}\,m_P$. When $\delta\tau$ takes continuous values, the probability is directly proportional to $t/\beta$. When $\delta\tau = n\, t_P$, as considered in the main text, the discontinuities have a fixed size. If, however, $\delta\tau = \sqrt{n}\, t_P$, as motivated from LQG in this section, the discontinuities rapidly shrink as $t/\beta$ increases.
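The shrinking of the discontinuities under the square-root spectrum, compared with the fixed steps of the linear one, is easy to see numerically:

```python
import math

# Size (in units of t_P) of the jump in delta_tau between consecutive levels,
# for the linear spectrum (delta_tau = n t_P) and the square-root spectrum
# (delta_tau = sqrt(n) t_P):
def step_linear(n):
    return (n + 1) - n                       # always 1

def step_sqrt(n):
    return math.sqrt(n + 1) - math.sqrt(n)   # ~ 1/(2 sqrt(n)): shrinks with n

for n in (1, 4, 100):
    print(n, step_linear(n), round(step_sqrt(n), 3))
```

By $n \sim 100$ the square-root steps are already twenty times smaller than the linear ones, which is why the experiment can only discriminate the two hypotheses over the first few plateaux.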

Conclusion
In this article, we have devised an experiment that would probe a hypothetical granularity of time at the Planck scale. We have also carried out an order of magnitude analysis of the experimental requirements. First, we have determined a set of constraints that would ensure the visibility of the plateaux in the plot of the probability p + (t/β). These constraints are expressed as a set of inequalities on the experimental parameters. Second, based on current claims in the experimental physics literature, we have shown that there exists a reasonable range of parameters that satisfy the constraints. The obtained values are gathered in table 1. Finally, we have determined the temperature and pressure conditions required to avoid too fast decoherence.
Perhaps surprisingly, we conclude that the proposed experiment could be a feasible task in the foreseeable future. In particular, we estimate that it is of a difficulty comparable to that of contemporary experimental proposals for testing the non-classicality of the gravitational field. Nevertheless, it remains difficult, and will require pooling expertise from adjacent experimental fields.
The success of this experiment requires a careful consideration of the uncertainty in the induced gravitational phase $\delta\phi$, estimated through the probability $p_+$. This uncertainty must satisfy
$$\Delta p_+ < 10^{-1}\,\frac{m}{2\, m_P}.$$
We see that the Planck mass acts as a natural scale for the effect to become prominent: smaller masses require higher precision in estimating the probability $p_+$.
The possibility of probing planckian time without involving extremely high energies may be a disturbing idea to many physicists. However, the history of physics shows examples where scientists have gained knowledge at a physical scale that was widely believed to be unreachable with the technology available at the time. A first example is Einstein's proposal of a way to measure the size of atoms by observing the brownian motion of mesoscopic pollen grains [47]. Another is Millikan's demonstration that electric charge comes in discrete packets, together with his measurement of the charge of the smallest packet (the electron) [48,49]. Again, such a feat was achieved through the observation of the mesoscopic motion of charged oil drops. In both cases, as in our proposal, the scale of the discreteness was reached through mesoscopic observables thanks to two leverage effects: an algebraic game involving very small or very large constants, and a statistical game involving the collection of many events.
The importance of realising the proposed experiment lies primarily in the groundbreaking implications of potentially discovering a granularity of time at the Planck scale. A negative result could also have significant implications, guiding fundamental theory. Finally, an easier version of the experiment with relaxed constraints might remain of significant interest, setting new bounds on the continuous behaviour of time in this unexplored, but soon accessible regime.