Standard Guide for Measurement of Particle Size Distribution of Nanomaterials in Suspension by Photon Correlation Spectroscopy (PCS)

SIGNIFICANCE AND USE
5.1 PCS is one of the very few techniques able to measure particle size distribution in the nano-size region. This guide highlights this light scattering technique, generally applicable in the particle size range from the sub-nm region up to the onset of sedimentation in the sample. The PCS technique is usually applied to slurries or suspensions of solid material in a liquid carrier. It is a first principles method (that is, calibration, in the standard understanding of the word, is not involved). The measurement is hydrodynamically based and therefore provides size information for the particle in its suspending medium (typically water). Thus the hydrodynamic diameter will almost certainly differ from the diameters reported by other techniques, and users of PCS need to be aware of the distinctions among the various descriptors of particle diameter before making comparisons between techniques. Notwithstanding the preceding sentence, the technique is widely applied in industry and academia, both as a research and development tool and as a QC method for the characterization of submicron systems.
SCOPE
1.1 This guide deals with the measurement of particle size distribution of suspended particles, which are solely or predominantly sub-100 nm, using the photon correlation (PCS) technique. It does not provide a complete measurement methodology for any specific nanomaterial, but provides a general overview and guide as to the methodology that should be followed for good practice, along with potential pitfalls.  
1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard.  
1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.  
1.4 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.

General Information

Status
Published
Publication Date
31-Jan-2021
Technical Committee
E56 - Nanotechnology

Relations

ASTM E1617-09(2024), effective 01-Feb-2024
ASTM E1617-09(2019), effective 01-Apr-2019
ASTM F1877-16, effective 01-Oct-2016
ASTM E177-14, effective 01-May-2014
ASTM E1617-09(2014)e1, effective 01-Apr-2014
ASTM E691-13, effective 01-May-2013
ASTM E177-13, effective 01-May-2013
ASTM E691-11, effective 01-Nov-2011
ASTM E177-10, effective 01-Oct-2010
ASTM F1877-05(2010), effective 01-Jun-2010
ASTM E1617-09, effective 01-Mar-2009
ASTM E177-08, effective 01-Oct-2008
ASTM E691-08, effective 01-Oct-2008
ASTM E1617-97(2007), effective 01-Apr-2007
ASTM E177-06b, effective 15-Nov-2006

Overview

ASTM E2490-09(2021), titled Standard Guide for Measurement of Particle Size Distribution of Nanomaterials in Suspension by Photon Correlation Spectroscopy (PCS), provides practical guidelines for measuring the particle size distribution of nanomaterials suspended in liquids using photon correlation spectroscopy (PCS). PCS, also known as dynamic light scattering (DLS) or quasi-elastic light scattering (QELS), is a widely adopted first principles technique for characterizing particles typically below 100 nm. This ASTM standard is essential for both industry and academia, serving research and development, as well as quality control (QC) in the field of nanotechnology.

Key Topics

  • Measurement Range: PCS is effective for analyzing particle sizes from the sub-nanometer scale up to the onset of sedimentation, predominantly for particles smaller than 100 nm in suspension.
  • Methodology: The guide describes the application of PCS to slurries or suspensions, specifying that the technique measures the hydrodynamic size of particles in their dispersing medium, which often differs from sizes reported by other methods.
  • Instrument Verification: Verification protocols are recommended, typically involving NIST-traceable standards to ensure instrument performance.
  • Sample Preparation: Highlights the importance of appropriate dispersion, avoidance of contamination, and maintaining colloidal stability, including the use of diluents and stabilizers specific to the sample.
  • Practical Considerations: Addresses the differences in particle size values obtained using PCS versus other techniques, underlining the need to understand and compare the various descriptors of particle diameter.
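
The gap between the intensity-weighted size reported by PCS and the number-weighted size reported by counting techniques can be made concrete with a short calculation. The sketch below is illustrative only: it assumes Rayleigh-regime scattering (scattered intensity scaling as diameter to the sixth power), and the sample populations are invented.

```python
# Illustrative sketch (not from the standard): why the intensity-weighted
# z-average diameter from PCS exceeds the number-weighted mean for a
# polydisperse sample. Assumes Rayleigh-regime scattering (intensity ~ d**6).

def number_mean(diameters_nm, counts):
    """Ordinary number-weighted mean diameter (what a counting technique
    such as electron microscopy would approximate)."""
    return sum(d * n for d, n in zip(diameters_nm, counts)) / sum(counts)

def z_average(diameters_nm, counts):
    """Harmonic intensity-weighted mean diameter, with each size class
    weighted by its scattered intensity n * d**6."""
    weights = [n * d ** 6 for d, n in zip(diameters_nm, counts)]
    return sum(weights) / sum(w / d for w, d in zip(weights, diameters_nm))

# A hypothetical bimodal mixture: many 20 nm particles, a few 80 nm particles.
diameters = [20.0, 80.0]
counts = [1000, 10]
print(number_mean(diameters, counts))  # close to 20 nm
print(z_average(diameters, counts))    # pulled far toward 80 nm by d**6 weighting
```

The same sixth-power weighting is the reason a 60 nm particle scatters (60/6)⁶, that is one million, times as much light as a 6 nm particle of the same composition, a point the standard itself makes.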

Applications

  • Nanomaterials Characterization: Crucial for determining size distribution in nanoparticles used in pharmaceuticals, coatings, biomaterials, and other advanced materials.
  • Quality Control and Research: Used in laboratories and industrial settings to monitor consistency and quality of nanomaterials during production.
  • Material Development: Supports the development of new nanomaterials by providing key insights into particle size, which can influence the physical, chemical, and biological properties of materials.
  • Colloidal Suspension Analysis: Assists in examining the stability and behavior of colloidal dispersions, including assessing agglomeration and changes in particle dynamics.
  • Academic Research: Enables accurate characterization in academic studies focused on nanoscale systems, ensuring reproducibility and comparability of results.

Related Standards

Utilizing ASTM E2490-09(2021) alongside complementary standards can provide a more robust framework for nanoparticle characterization:

  • ASTM E177 - Practice for Use of Terms Precision and Bias in ASTM Test Methods
  • ASTM E691 - Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method
  • ASTM E1617 - Practice for Reporting Particle Size Characterization Data
  • ASTM F1877 - Practice for Characterization of Particles
  • ISO 13321 - Particle size analysis - Photon correlation spectroscopy
  • ISO 13320-1 - Particle size analysis - Laser diffraction methods - Part 1: General principles
  • ISO 14488 - Particulate material - Sampling and sample splitting for the determination of particulate properties

Practical Value

Adhering to ASTM E2490-09(2021) ensures that particle size distribution measurements of nanomaterials in liquid suspension are robust, reproducible, and fit for purpose. Applying this standard helps organizations meet international expectations for nanoparticle characterization and supports regulatory compliance. It enables stakeholders to make informed decisions about material performance, stability, and suitability for specific applications in research and industry.

By following best practices outlined in this standard, laboratories and manufacturers improve the reliability of their particle sizing results, strengthen quality assurance, and facilitate innovation in nanotechnology.

Keywords: Particle size distribution, nanomaterials, photon correlation spectroscopy, PCS, dynamic light scattering, DLS, particle characterization, ASTM E2490, nanoparticle analysis, colloidal suspension, quality control, ISO 13321, nanotechnology standards.

Buy Documents

ASTM E2490-09(2021) - Standard Guide for Measurement of Particle Size Distribution of Nanomaterials in Suspension by Photon Correlation Spectroscopy (PCS), English language (16 pages).


Frequently Asked Questions

ASTM E2490-09(2021) is a guide published by ASTM International. Its full title is "Standard Guide for Measurement of Particle Size Distribution of Nanomaterials in Suspension by Photon Correlation Spectroscopy (PCS)". The standard's significance, use, and scope are reproduced in full at the top of this page.

ASTM E2490-09(2021) is classified under the following ICS (International Classification for Standards) categories: 71.100.01 - Products of the chemical industry in general. The ICS classification helps identify the subject area and facilitates finding related standards.

ASTM E2490-09(2021) has the following relationships with other standards: it links to ASTM E1617-09(2024), ASTM E1617-09(2019), ASTM F1877-16, ASTM E177-14, ASTM E1617-09(2014)e1, ASTM E691-13, ASTM E177-13, ASTM E691-11, ASTM E177-10, ASTM F1877-05(2010), ASTM E1617-09, ASTM E177-08, ASTM E691-08, ASTM E1617-97(2007), and ASTM E177-06b. Understanding these relationships helps ensure you are using the most current and applicable version of the standard.

ASTM E2490-09(2021) is available in PDF format for immediate download after purchase. The document can be added to your cart and obtained through the secure checkout process. Digital delivery ensures instant access to the complete standard document.

Standards Content (Sample)


This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.

Designation: E2490 − 09 (Reapproved 2021)

Standard Guide for Measurement of Particle Size Distribution of Nanomaterials in Suspension by Photon Correlation Spectroscopy (PCS)

This standard is issued under the fixed designation E2490; the number immediately following the designation indicates the year of original adoption or, in the case of revision, the year of last revision. A number in parentheses indicates the year of last reapproval. A superscript epsilon (ε) indicates an editorial change since the last revision or reapproval.
1. Scope

1.1 This guide deals with the measurement of particle size distribution of suspended particles, which are solely or predominantly sub-100 nm, using the photon correlation (PCS) technique. It does not provide a complete measurement methodology for any specific nanomaterial, but provides a general overview and guide as to the methodology that should be followed for good practice, along with potential pitfalls.

1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard.

1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.

1.4 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.

2. Referenced Documents

2.1 ASTM Standards:
E177 Practice for Use of the Terms Precision and Bias in ASTM Test Methods
E691 Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method
E1617 Practice for Reporting Particle Size Characterization Data
F1877 Practice for Characterization of Particles

2.2 ISO Standards:
ISO 13320-1 Particle size analysis — Laser diffraction methods — Part 1: General principles
ISO 13321 Particle size analysis — Photon correlation spectroscopy
ISO 14488 Particulate material — Sampling and sample splitting for the determination of particulate properties

3. Terminology

3.1 Definitions of Terms Specific to This Standard:

3.1.1 Some of the definitions in 3.1 will differ slightly from those used within other (non-particle sizing) standards (for example, repeatability, reproducibility). For the purposes of this guide only, we utilize the stated definitions, as they enable the isolation of possible errors or differences in the measurement to be assigned to instrumental, dispersion or sampling variation.

3.1.2 correlation coefficient, n—measure of the correlation (or similarity/comparison) between two signals or a signal and itself at another point in time.

3.1.2.1 Discussion—If there is perfect correlation (the signals are identical), then this takes the value 1.00; with no correlation the value is zero.

3.1.3 correlogram or correlation function, n—graphical representation of the correlation coefficient over time.

3.1.3.1 Discussion—This is typically an exponential decay.

3.1.4 cumulants analysis, n—mathematical fitting of the correlation function as a polynomial expansion that produces some estimate of the width of the particle size distribution.

3.1.5 diffusion coefficient (self or collective), n—a measure of the Brownian motion movement of a particle(s) in a medium.

3.1.5.1 Discussion—After measurement, the value is inputted into the Stokes-Einstein equation (Eq 1, see 7.2.1.2(4)). Diffusion coefficient units in photon correlation spectroscopy (PCS) measurements are typically µm²/s.

3.1.6 Mie region, n—in this region (typically where the size of the particle is greater than half the wavelength of incident

This guide is under the jurisdiction of ASTM Committee E56 on Nanotechnology and is the direct responsibility of Subcommittee E56.02 on Physical and Chemical Characterization. Current edition approved Feb. 1, 2021. Published February 2021. Originally approved in 2008. Last previous edition in 2015 as E2490–09 (2015). DOI: 10.1520/E2490-09R21.

For referenced ASTM standards, visit the ASTM website, www.astm.org, or contact ASTM Customer Service at service@astm.org. For Annual Book of ASTM Standards volume information, refer to the standard's Document Summary page on the ASTM website.

ISO standards are available from American National Standards Institute (ANSI), 25 W. 43rd St., 4th Floor, New York, NY 10036, http://www.ansi.org.

Copyright © ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959, United States.
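
The exponential-decay shape described for the correlogram in 3.1.3 can be sketched with synthetic data. The following is a minimal illustration, not instrument code: the coherence factor and decay rate are invented values, and the Siegert relation g2(τ) = 1 + β·exp(−2Γτ) is assumed for a monodisperse sample.

```python
# Sketch: a synthetic single-exponential correlogram (Siegert relation assumed,
# monodisperse sample) and recovery of the decay rate Gamma from its slope.
import math

beta = 0.9        # coherence factor (illustrative value)
gamma = 5.0e3     # true decay rate, 1/s (illustrative value)
taus = [i * 1.0e-5 for i in range(1, 200)]   # correlator lag times, s

# Ideal correlogram: decays exponentially to a flat baseline of 1.0
g2 = [1.0 + beta * math.exp(-2.0 * gamma * t) for t in taus]

# ln(g2 - 1) is a straight line in tau; recover its slope by least squares
ys = [math.log(v - 1.0) for v in g2]
n = len(taus)
sx, sy = sum(taus), sum(ys)
sxx = sum(t * t for t in taus)
sxy = sum(t * y for t, y in zip(taus, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
gamma_recovered = -slope / 2.0   # slope = -2 * Gamma
print(gamma_recovered)           # ~5000 1/s
```

In real data, a baseline that rises again instead of staying flat is one of the warning signs of number fluctuations discussed in the procedure section of the guide.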
light), the light scattering behavior is complex and can only be interpreted with a more rigorous and exact (and all-encompassing) theory.

3.1.6.1 Discussion—This more exact theory can be used instead of the Rayleigh and Rayleigh-Gans-Debye approximations described in 3.1.8 and 3.1.9. The differences between the approximations and exact theory are typically small in the size range considered by this standard. Mie theory is needed in order to convert an intensity distribution to one based on volume or mass.

3.1.7 polydispersity index (PI), n—descriptor of the width of the particle size distribution obtained from the second and third cumulants (see 8.3).

3.1.8 Rayleigh-Gans-Debye region, n—in this region (stated to be where the diameter of the particle is up to half the wavelength of incident light), the scattering tends to the forward direction, and again, an approximation can be used to describe the behavior of the particle with respect to incident light.

3.1.9 Rayleigh region, n—size limit below which the scattering intensity is isotropic—that is, there is no angular dependence for unpolarized light.

3.1.9.1 Discussion—Typically, this region is stated to be where the diameter of the particle is less than a tenth of the wavelength of the incident light. In this region a mathematical approximation can be used to predict the light-scattering behavior.

3.1.10 repeatability, n—in PCS and other particle sizing techniques, this usually refers to the precision of repeated consecutive measurements on the same group of particles and is normally expressed as a relative standard deviation (RSD) or coefficient of variation (C.V.).

3.1.10.1 Discussion—The repeatability value reflects the stability (instrumental, but mainly the sample) of the system over time. Changes in the sample could include dispersion (desired?) and settling.

3.1.11 reproducibility, n—in PCS and particle sizing this usually refers to second and further aliquots of the same bulk sample (and therefore is subject to the homogeneity or otherwise of the starting material and the sampling method employed).

3.1.11.1 Discussion—In a slurry system, it is often the largest error when repeated samples are taken. Other definitions of reproducibility also address the variability among single test results gathered from different laboratories when inter-laboratory testing is undertaken. It is to be noted that the same group of particles can never be measured in such a system of tests, and therefore reproducibility values are typically considerably in excess of repeatability values.

3.1.12 robustness, n—a measure of the change of the required parameter with deliberate and systematic variations in any or all of the key parameters that influence it.

3.1.12.1 Discussion—For example, dispersion time (ultrasound time and duration) almost certainly will affect the reported results. Variation in pH is likely to affect the degree of agglomeration, and so forth.

3.1.13 rotational diffusion, n—a process by which the equilibrium statistical distribution of the overall orientation of molecules or particles is maintained or restored.

3.1.14 translational diffusion, n—a process by which the equilibrium statistical distribution of molecules or particles in space is maintained or restored.

3.1.15 z-average, n—harmonic intensity weighted average particle diameter (the type of diameter that is isolated in a PCS experiment; a harmonic-type average is usual in frequency analyses) (see 8.9).

3.2 Acronyms:

3.2.1 APD—avalanche photodiode detector

3.2.2 CONTIN—mathematical program for the solution of non-linear equations created by Stephen Provencher and extensively used in PCS (1).

3.2.3 CV—coefficient of variation

3.2.4 DLS—dynamic light scattering

3.2.5 NNLS—non-negative least squares

3.2.6 PCS—photon correlation spectroscopy

3.2.7 PMT—photomultiplier tube

3.2.8 QELS—quasi-elastic light scattering

3.2.9 RGD—Rayleigh-Gans-Debye

4. Summary of Guide

4.1 This guide addresses the technique of photon correlation spectroscopy (PCS), alternatively known as dynamic light scattering (DLS) or quasi-elastic light scattering (QELS), used for the measurement of particle size within liquid systems. To avoid confusion, every usage of the term PCS implies that DLS or QELS can be used in its place.

5. Significance and Use

5.1 PCS is one of the very few techniques that are able to deal with the measurement of particle size distribution in the nano-size region. This guide highlights this light scattering technique, generally applicable in the particle size range from the sub-nm region until the onset of sedimentation in the sample. The PCS technique is usually applied to slurries or suspensions of solid material in a liquid carrier. It is a first principles method (that is, calibration, in the standard understanding of the word, is not involved). The measurement is hydrodynamically based and therefore provides size information in the suspending medium (typically water). Thus the hydrodynamic diameter will almost certainly differ from other size diameters isolated by other techniques, and users of the PCS technique need to be aware of the distinction of the various descriptors of particle diameter before making comparisons between techniques. Notwithstanding the preceding sentence, the technique is widely applied in industry and academia as both a research and development tool and as a QC method for the characterization of submicron systems.

The boldface numbers in parentheses refer to the list of references at the end of this standard.
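
The cumulants analysis of 3.1.4 and the polydispersity index of 3.1.7 can be sketched numerically. The example below is an illustration with invented decay rates, and it uses the common PI = μ₂/Γ̄² convention (as in ISO 13321); it is not the standard's prescribed algorithm.

```python
# Sketch: cumulants analysis on a synthetic bimodal field correlation function.
# Fits ln g1(tau) = c0 - Gamma_mean*tau + (mu2/2)*tau**2 by least squares and
# reports PI = mu2 / Gamma_mean**2 (ISO 13321 convention). Inputs are invented.
import math

gammas = [3.0e3, 7.0e3]      # two decay rates, 1/s (illustrative values)
weights = [0.5, 0.5]         # equal intensity weights (illustrative values)
taus = [i * 6.0e-7 for i in range(1, 101)]
g1 = [sum(w * math.exp(-g * t) for w, g in zip(weights, gammas)) for t in taus]

def quad_fit(xs, ys):
    """Least-squares quadratic fit via the 3x3 normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    out = []
    for col in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        out.append(det3(M) / d)
    return out  # coefficients of 1, x, x**2

scale = max(taus)    # fit against dimensionless lag for numerical conditioning
c0, c1, c2 = quad_fit([t / scale for t in taus], [math.log(v) for v in g1])
gamma_mean = -c1 / scale        # first cumulant: mean decay rate
mu2 = 2.0 * c2 / scale ** 2     # second cumulant: width of the rate distribution
pi_index = mu2 / gamma_mean ** 2
print(gamma_mean, pi_index)     # near 5000 1/s and 0.16
```

A PI below about 0.1 corresponds to the "nearly monodisperse" standards mentioned in the verification section; larger values flag a distribution wide enough that intensity, volume, and number averages will diverge noticeably.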
6. Reagents

6.1 In general, no reagents specific to the technique are necessary. However, dispersing and stabilizing agents often are required for a specific test sample in order to preserve colloidal stability during the measurement. A suitable diluent is used to achieve a particle concentration appropriate for the measurement. Particle size is likely to undergo change on dilution, as the ionic environment, within which the particles are dispersed, changes in nature or concentration. This is particularly noticeable when diluting a monodisperse latex. A latex that is measured as 60 nm in 1×10⁻³ M NaCl can have a hydrodynamic diameter of over 70 nm in 1×10⁻⁶ M NaCl (close to deionized water). In order to minimize any changes in the system on dilution, it is common to use what is commonly called the "mother liquor". This is the liquid in which the particles exist in stable form and is usually obtained by centrifuging of the suspension, or by making up the same ionic nature of the dispersant liquid if knowledge of this material is available. Many biological materials are measured in a buffer (often phosphate), which confers the correct (range of) conditions of pH and ionic strength to assure stability of the system. Instability (usually through inadequate zeta potential (2)) can promote agglomeration leading to settling or sedimentation in a solid-liquid system or creaming in a liquid-liquid system (emulsion). Such fundamental changes interfere with the stability of the suspension and need to be minimized as they affect the quality (accuracy and repeatability) of the reported measurements. These are likely to be investigated in any robustness experiment.

7. Procedure

7.1 Verification:

7.1.1 The instrument to be used in the determination should be verified for correct performance, within pre-defined quality control limits, by following protocols issued by the instrument manufacturer. These confirmation tests normally involve the use of one or more NIST-traceable particle size standards. In the sub-micron (<1×10⁻⁶ m) region, these standards (for example, NIST, Duke Scientific, now part of Thermo Fisher Scientific) tend to be nearly monodisperse (that is, narrow, single mode distribution, PI < 0.1) and, while confirming the x (size) axis, do not verify the y (or quantity) axis. Further, there is a lack of available standards for the sub-20 nm region, and therefore biological materials (for example, bovine serum albumin (BSA), cholesterol, haem, size controlled dendrimers, Au sols) of known size (often by molecular modeling) can be utilized. Note that PCS is a first principles measurement and thus calibration in the formal sense (adjustment of the instrument to read a true and known value) cannot be undertaken. In the event of a "failure" at the verification stage, the issues to check involve quality of the dilution water, state of dispersion and stability of the standard under dilution, plus instrumental issues such as thermal stability, cleanliness and alignment of optical components. The raw correlogram data can be examined during and after acquisition. Such examination requires some experience and training. During data acquisition one looks for a stable count level without jumps or leaps in the level of the scattering counts that could be produced by particles (of dust or contamination) falling through the measurement zone ("number fluctuations"). Ideally the form of the correlogram is an exponential decay to a flat baseline (approximating to the photon counts in the system without sample) that does not rise again (a rise again indicating number fluctuations in the data). Manufacturers also provide other means of assuring the reliability of the data, and it is recommended that these protocols are consulted, as appropriate.

7.1.2 Given the nature of the produced intensity distribution and the likelihood that the size standard has been certified by electron microscopy (number distribution), care needs to be exercised in direct comparison of the results. For a completely monodisperse sample (every particle identical), the number and intensity distributions are essentially identical. For the real-world situation where there is some polydispersity (width) to the distribution, the number distribution is expected to be smaller than the produced intensity distribution; the greater the polydispersity, the larger the differences between intensity, volume and number distributions. Note that verification of a system only demonstrates that the instrument is performing adequately with the prescribed standard materials. Practical considerations for real-world materials (especially "dispersion" if utilized, or if the distribution is relatively polydisperse) mean that the method used to measure that real-world material needs to be carefully evaluated for precision (repeatability).

7.2 Measurement:

7.2.1 Introduction:

7.2.1.1 The measurement of particle size distribution in the nano- (sub-100 nm) region by light scattering depends on the interaction of light with matter and the random or Brownian motion that a particle exhibits in a liquid medium in free suspension. There must be an inhomogeneity in the refractive indices of the particle and the medium within which it exists in order for light scattering to occur. Without such an inhomogeneity (for example, in so-called index-matched systems) there is no scattering; the particle is invisible to light and no measurements can be made by PCS or any other light scattering technique.

7.2.1.2 For particles <100 nm, as considered in this guide, several facts hold true:

(1) The amount of scattering is weak in relative terms and depends highly on the size of the particle. In the Rayleigh approximation region (typically d < λ/10, in which d is the diameter of the particle and λ is the wavelength of light employed), the intensity of scattering is proportional to r⁶, or (volume)², or (relative molecular mass)². With a commonly utilized helium-neon (He-Ne) laser (632.8 nm), this limit is approximately 60 nm. This means, in practice, that a 60 nm particle scatters 1 million times as much light as a 6 nm particle of the same composition. Thus, it is imperative that solutions are kept free of any contaminating particles, for example dust, which is often present in the local environment and is usually considerably larger than the material that requires measurement. This means filtering liquids used to contain or dilute the particles to at least the same level as the size of the particles that require characterizing. The very weak scattering means that conventional light detectors (for example, silicon photodiodes)
E2490 − 09 (2021)
as used in other light scattering techniques (for example, laser diffraction) cannot be used. The technique of correlating the signal with itself, combined with photon counting techniques, is thus employed; the principle being that the noise is random while the Brownian motion is fixed. Constantly subtracting the noise from the overall signal leaves the retained Brownian motion signal.

(2) The intensity of scattering in the Rayleigh region is inversely proportional to the fourth power of the wavelength of light employed. Thus, if the wavelength of incident light could be halved, then the intensity of scattering that would be observed is increased by a factor of 16. It is common practice to use lasers of a lower wavelength than a He-Ne (632.8 nm) to increase the amount of scattering and, hence, signal. This is usually preferable to increasing the power of the laser, with possible undesired effects (for example, heating, convection currents). However, note that lower wavelengths sometimes overlap an absorption edge for some molecular species, leading to a loss of signal intensity. Potential fluorescence issues also need consideration, as the detectors used for photon counting are usually responsive to a wide wavelength range. Sometimes, narrow bandwidth filters can be employed to ensure that only light of the correct wavelength is detected. Such means usually reduce or compromise the actual signal seen by the detector. The detector is typically either a photomultiplier tube (PMT) or an avalanche photodiode (APD), as both count individual photons.

(3) For spherical particles, there is limited (assumed to be no) angular dependence of the scattering in the Rayleigh region for unpolarized light. This effectively isotropic (or equal) scattering means that only a single detector angle need be employed to measure the scattered light. For non-spherical particles, rotational motion will give angular dependence (even in the Rayleigh region). Above the Rayleigh region (>60 nm) the light starts to be scattered towards the forward angle—in layman’s terms it becomes egg-shaped with more forward than back-scatter—and up to λ/2 (~300 nm for a He-Ne laser at 632.8 nm) the Rayleigh-Gans-Debye approximation works well, as there is little structure to the observed polar pattern of scattering. Thus, in the <100 nm region of interest, approximations can be usefully employed and a full explanation of the interaction of light with matter (Mie theory) need not be invoked unless the information is required to be presented on a volume or number basis (see 8.9).

(4) The measurement of size in the sub-100 nm region relies on the measurement of the amount of Brownian motion (in particular the diffusion coefficient) of the particle as formulated in the Stokes-Einstein equation:

R_h = kT/(6πηD)  (1)

where:
R_h = the hydrodynamic radius,
k = Boltzmann’s constant (= R/N, where R = gas constant and N = Avogadro’s number),
T = the absolute temperature (Kelvin),
π = the universal constant,
η = the viscosity of the medium, and
D = the (measured) diffusion coefficient.

(5) Note that, in Eq 1, the density of the particle plays no role in Brownian motion (although, of course, it does in settling; see Point 9 below), even though this appears counterintuitive to first instinct. Note also that a hydrodynamic radius (or diameter) is derived. This refers to an equivalent size, in spherical terms, to that of a particle moving with the same diffusion coefficient as the observed particle. Thus, for an irregularly shaped particle or one with significant external morphology (or both), the derived diameter is not likely to correspond to any measured axis of the image of the particle. The viscosity refers to the medium in which the particle is dispersed. In a dilute system it is assumed that the particles do not interact, so the viscosity can be assumed to be that of the medium or diluent. In higher concentrations, particles are likely to be in regions of hindered mobility and the effective viscosity is thus higher than that of the particle-free suspension medium.

(6) Note the term diffusion coefficient. There are two types of diffusion to be considered for particles in free suspension:

(a) Translational, where the so-called Stokes-Einstein relationship given in Eq 1 applies. Rewriting with the diffusion coefficient on the left:

D_t = kT/(6πηR_h)  (2)

(b) Rotational, where the Stokes-Einstein-Debye relation applies:

D_r = kT/(8πη(R_h)³)  (3)

(7) Association of particles (or molecules) leads to changes in the rotational diffusion coefficient, which also affects the translational diffusion coefficient. Hence, interactions between particles can complicate the interpretation of the observed diffusion coefficient, which, for nonspherical particles, is a combination of the translational and rotational diffusion coefficients. These particle-particle interactions tend to be concentration rather than size dependent, and both translational and rotational diffusion coefficients are dependent on the viscosity of the surrounding fluid.

(8) The motion of the particles must be random. Nonrandom particle motion is the main reason for apparent failure or nonapplicability of the technique. Such nonrandom motion can occur through convection currents being present in the system or through particles (too large or dense for the technique) settling during the measurement sequence. Therefore, accurate temperature control and stabilization are mandatory. If settling/sedimentation occurs in the measurement, other than to a very minor extent, then the result is almost certainly compromised, as it will reflect a changing and unstable system. If visible settled solid is present at the bottom of a container, then it is very likely that the PCS technique is not recommended. In this case conventional laser light scattering (laser diffraction) is likely to be the preferred technique. If settling can be observed either in the measurement container or in the measurement cuvette, then it is certain that the original material being measured is not “nano” or is unstable during the measurement time frame.
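The arithmetic of Eq 1 and Eq 2 is easy to check numerically. The following is a minimal illustrative sketch, not part of this standard; the diffusion coefficient, temperature and viscosity values are assumed example figures for water at 25 °C.

```python
# Illustrative sketch only (not part of the standard): deriving a
# hydrodynamic diameter from a measured translational diffusion
# coefficient via Eq 1, R_h = kT / (6*pi*eta*D).
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def hydrodynamic_diameter(D, T=298.15, eta=8.905e-4):
    """Hydrodynamic diameter (m) for a measured diffusion coefficient
    D (m^2/s) in a medium of viscosity eta (Pa*s) at temperature T (K)."""
    r_h = K_B * T / (6.0 * math.pi * eta * D)
    return 2.0 * r_h

# Assumed example value: D = 4.3e-12 m^2/s in water at 25 degC
# corresponds to a diameter of roughly 114 nm.
print(f"{hydrodynamic_diameter(4.3e-12) * 1e9:.1f} nm")
```

Note the inverse dependence: halving the measured diffusion coefficient doubles the reported hydrodynamic diameter, so any error in D propagates directly into the size.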
(9) With respect to size and density, consider the calculations in Table 1 using Stokes’ Law.

(10) It can be deduced from Table 1 that if a material is truly “nano” (that is, <100 nm), it tends to remain in permanent suspension and exhibits little if any settling tendency. In many situations, for example a gel, the particle density is significantly lower due to incorporation of water into the particle matrix, and thus the settling time is increased further.

(11) Sometimes it is thought that placing the particles in a material of higher viscosity reduces or even eliminates any settling tendency. This is true, but the Brownian motion is also reduced accordingly and no gain is achieved (in the same way that swimming in concentrated sucrose solution is no quicker or slower than in water).

(a) Most dry powder materials cannot be fully dispersed back to a primary size, and thus size measurements from diffusion reflect the state of agglomeration of the system rather than a primary size. Hence this guide assumes that the reader has access to a well dispersed liquid suspension or preparation of nano-size particles for the measurement.

(12) Note from Eq 1 the obvious points that:
(a) As the size of particle increases, the speed of Brownian motion decreases.
(b) As the viscosity of the medium increases, the speed of Brownian motion decreases.
(c) As the temperature is increased, the speed of Brownian motion increases correspondingly.

7.3 Theoretical Background to the Correlation Function:
7.3.1 It is necessary to measure the diffusion coefficient to input into Eq 1 in order to derive a particle size. Note that such a single input would only produce a single size value. This section deals with the measurement of the diffusion coefficient and the objective of providing a particle size distribution from the measured data.
7.3.2 In viewing the intensity of scattered light from a group of suspended moving particles, there is a temporal fluctuation of this light intensity (the “speckle” pattern), in the same way that the leaves of a tree, in windy conditions, attenuate the light of the sun and give light fluctuations over a short period of time while the overall light intensity is not altered. Small particles diffuse quickly and thus exhibit more rapid fluctuations on a short time frame than larger particles, which diffuse more slowly. Over a very short time frame, δt (typically of the order of nanoseconds or milliseconds), the instantaneous signal intensity correlates well with the signal at time = 0. Light fluctuations that change more rapidly (small particles) lose this correlation more quickly than larger particles. If the instantaneous signal intensities are stored, then it is possible to compare the values of the received signals over time with those at the start of the experiment (or indeed with that at any other period of time). The degree of comparison between 2 signals, or 1 signal with itself, is represented by the correlation coefficient, usually given the symbol [G], which can range from 1 (perfect correlation, the signal is identical to the signal it is being compared against) down to zero (no correlation). It can easily be shown (2) that this correlation coefficient decays exponentially with time for monodisperse particles (that is, all the particles are identical in size). See Fig. 1. The decay in correlation is more rapid for a small particle in comparison to a larger one (see Fig. 2).

8. Interpretation of the Correlation Function
8.1 Introduction:
8.1.1 There are a number of ways to interpret the correlation function and this section describes the more commonly utilized techniques.
TABLE 1 Settling Calculations Based on Stokes’ Law as a Function of Size and Density at Constant Temperature

Diameter   Diameter   ρ (Material)   ρ (Water)   η (Water),       Time to Settle 1 cm (1×10⁻² m) in Water
  (µm)       (nm)       (kg/m³)       (kg/m³)    298 K (Poise)     Minutes        Hours       Days
  0.01         10         2500          1000       0.008905      1815494.39     30258        1261
  0.1         100         2500          1000       0.008905        18154.94       302.58       12.61
  1          1000         2500          1000       0.008905          181.55         3.03        0.126
  10        10000         2500          1000       0.008905            1.82         0.03        0.001
  100      100000         2500          1000       0.008905            0.02         0.00        0.000
  0.01         10         3500          1000       0.008905      1089296.64     18154.94      756
  0.1         100         3500          1000       0.008905        10892.97       181.55        7.56
  1          1000         3500          1000       0.008905          108.93         1.82        0.076
  10        10000         3500          1000       0.008905            1.09         0.02        0.001
  100      100000         3500          1000       0.008905            0.01         0.00        0.000
  0.01         10         4200          1000       0.008905       851013.00     14183.55      591
  0.1         100         4200          1000       0.008905         8510.13       141.84        5.91
  1          1000         4200          1000       0.008905           85.10         1.42        0.059
  10        10000         4200          1000       0.008905            0.85         0.01        0.001
  100      100000         4200          1000       0.008905            0.01         0.00        0.000
  0.01         10         5500          1000       0.008905       605164.80     10086.08      420
  0.1         100         5500          1000       0.008905         6051.65       100.86        4.20
  1          1000         5500          1000       0.008905           60.52         1.01        0.042
  10        10000         5500          1000       0.008905            0.61         0.01        0.000
  100      100000         5500          1000       0.008905            0.01         0.00        0.000
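The entries in Table 1 can be reproduced from Stokes’ Law. The sketch below is illustrative only and not part of this standard; it assumes g = 9.81 m/s² and the tabulated water viscosity of 0.008905 Poise (8.905 × 10⁻⁴ Pa·s).

```python
# Illustrative sketch (g = 9.81 m/s^2 assumed): time for a sphere to
# settle 1 cm at the Stokes' Law terminal velocity
# v = (rho_p - rho_f) * g * d**2 / (18 * eta).
G = 9.81  # gravitational acceleration, m/s^2

def settle_time_1cm(d_m, rho_particle, rho_fluid=1000.0, eta=8.905e-4):
    """Seconds to fall 1 cm for a sphere of diameter d_m (m) and density
    rho_particle (kg/m^3); eta in Pa*s (8.905e-4 Pa*s = 0.008905 Poise)."""
    v = (rho_particle - rho_fluid) * G * d_m ** 2 / (18.0 * eta)
    return 0.01 / v

# 100 nm sphere of density 2500 kg/m^3: matches the 18154.94 min
# entry of Table 1.
print(f"{settle_time_1cm(100e-9, 2500.0) / 60.0:.2f} min")
```

The d² dependence of the velocity is why halving the diameter quadruples the settling time in each column of the table.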
FIG. 1 Diagrammatic Representation of the Intensity Fluctuations with Small and Large Particles
FIG. 2 Traditional PCS Measurement Indicating the Main Components of a Typical System
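The behavior illustrated in Figs. 1 and 2 (rapid loss of correlation for small particles, slower loss for large ones) can be sketched numerically. The parameters below (He-Ne laser at 632.8 nm, water at 298.15 K, refractive index 1.33, detection at 90°) are illustrative assumptions, not requirements of this guide.

```python
# Illustrative sketch: decay of the correlation coefficient,
# g1(tau) = exp(-Gamma*tau) with Gamma = D*q**2, for monodisperse
# spheres. Assumed conditions (not mandated by this guide): water at
# 298.15 K, He-Ne laser (632.8 nm), medium refractive index 1.33,
# detection angle 90 degrees.
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def decay_rate(radius_m, T=298.15, eta=8.905e-4,
               wavelength=632.8e-9, n_medium=1.33, angle_deg=90.0):
    """Gamma (1/s): the diffusion coefficient of Eq 2 times the square
    of the scattering vector q = (4*pi*n/lambda)*sin(theta/2)."""
    D = K_B * T / (6.0 * math.pi * eta * radius_m)
    q = (4.0 * math.pi * n_medium / wavelength) * math.sin(
        math.radians(angle_deg) / 2.0)
    return D * q * q

def g1(tau_s, radius_m):
    """Correlation coefficient at delay tau_s for monodisperse spheres."""
    return math.exp(-decay_rate(radius_m) * tau_s)

# After a 100 us delay a 5 nm-radius sphere has lost most of its
# correlation, while a 50 nm-radius sphere has not.
print(g1(100e-6, 5e-9), g1(100e-6, 50e-9))
```

Under these assumed conditions, the 5 nm-radius sphere retains a correlation of roughly 0.18 after 100 µs while the 50 nm-radius sphere retains roughly 0.84, which is why the measured decay rate encodes particle size.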
8.2 Linear Analysis:
8.2.1 In the simplest analysis of the plot of the correlation coefficient against time, a straight line is fitted to the exponential decay by taking logarithms. Thus a monodisperse sample generates a straight line for the Log[G] versus Time plot. The slope of the plot is related to the reciprocal of the mean size of the particle system and the constant represents the noise in the system. We note that such an analysis only provides a mean size and no width of distribution is assumed or calculated. Clearly this assumption is only valid for narrow distributions—ideally monodisperse. A genuinely bimodal sample produces a single mean value when the cumulants analysis is used, because the fitting of a straight line to the log[G] data set is not appropriate. This z-average mean value is then intermediate
between the 2 separate mean values of each of the components of the bimodal. For the general case situation in which the log[G] versus Time plot is not linear (that is the norm!), see 8.3.

8.3 Polydisperse Samples—Cumulants Analysis:
8.3.1 First, note the important point that many of the techniques discussed below relate to situations where there is likely to be material > 100 nm present in the sample (and thus the distribution is broader than “monodisperse”). The situation is likely to be simpler (smaller values of polydispersity index) for samples that are 100 % < 100 nm, although polydisperse characterized standards in this region are non-existent and thus this point is difficult to verify in practice.
8.3.2 For samples that exhibit some width to the distribution (that is, contain a range of sizes), the logarithmic decay plot of the correlation function is not linear. This curve can be fitted by a polynomial of any desired number of terms, or indeed by any sum of any type of simple or complex curves. Therefore, we need to take extreme care in this region. While computers can calculate the number of terms based on an arbitrary number of terms, the end-user needs to decide whether it is a reasonable and sensible process to undertake, since more decimal places or numbers in the end result imply nothing about accuracy or resolution or sensitivity. All the preceding assessments of the quality of the instrument and result need to be verified for the system being measured.
8.3.3 In the simplest (Taylor/Maclaurin’s series type) expansion of the non-linear form of the Log[G] decay, we can express the form of the curve as:

Log[G] = a + bτ + cτ² + dτ³ + fτ⁴ …  (4)

where a, b, c, d, etc. are constants fitted empirically to the experimental curve.
8.3.4 The term b corresponds to the mean size (strictly speaking, the intensity-weighted z-average mean) and the second cumulant (cτ²) can be shown to be related to the variance (standard deviation²) or width of a hypothetical Gaussian distribution as follows:

Polydispersity Index (PI) = 2c/b²  (5)

where the term c is identical to the standard deviation in a Gaussian distribution and the b value is the Gaussian mean (identical, of course, to the mode and median for such a distribution).
8.3.5 The deconvolution of a single (measured) exponential decay curve into a set of exponential curves, each corresponding to a single particle size, that sum to give the measured exponential is clearly an ill-conditioned problem, and taking further terms beyond the fifth power (which would exactly fit six points or histogram bins if these were assumed) is usually meaningless, as this degree of information is not inherent in the raw plot. Normally we do not go beyond the third term (3). The corollary to this is that information such as x₉₀ (from diffraction—the 90 % undersize percentile; 90 % by volume less than this value), that the end-user is likely to be familiar with, becomes meaningless if only six channels of information are the maximum possible from a 5th order deconvolution. Any noise in the signal creates uncertainty in the derived solution. Worse still, with noise, the number of possible solutions tends to infinity and errors in these solutions are mathematically unbounded. In particular, more peaks can always be added in and thus give better and closer fits between the observed plots and those calculated. This does not mean that extra peaks provide a better solution—they only deal with the vagaries of any variation in the measured and calculated correlation curves. Note that fitting the measured data and deconvoluting within prescribed and predetermined experimental error limits is not guaranteed to yield a correct answer. This is disturbing to the uninitiated!
8.3.6 Johnsen and Brown (4) list the following ways of analyzing the raw correlation data: cumulants, Marquardt, S-exponential sums, Lambda depression, linear programming with sequence statistics (Zimmermann I, Zimmermann A, Zimmermann B, Jakeš), z-transform with spike recovery, exponential sampling, profiled singular value, histogram, CONTIN, RILIE, REPES, MAXENT, and so on. Finsy (5) also deals with these analytical tools.
8.3.7 In addition, other schemes exist. In all the cases indicated in 8.3.6, the authors show that the above algorithms can be “fooled” in pre-defined situations and that different particle size distributions arise as a result. Chapter VII in Chu’s standard text (6) deals with similar issues. Stephen Provencher terms this deconvolution an “apparently hopeless problem” (Lines 8 and 9 of p. 93 in Ref (7)).
8.3.8 Notwithstanding the above caveats, the most common ways of deriving a distribution from the non-linear logarithmic correlation plot involve first constraining the solution to give positive sizes (x axis) and positive percentages (y axis) in the NNLS (Non-Negative Least Squares) approach. This minimizes the differences between the calculated and observed data (on the basis of the lowest difference between the modulus of the sets) and allows only positive values of size and quantity. A further mathematical treatment is then invoked to isolate a distribution:
8.3.8.1 CONTIN—This is a (free) mathematical program designed by Dr. Stephen Provencher (1, 7, 8, 9) while working at EMBL, Heidelberg, Germany—the Max-Planck-Institut für Biophysikalische Chemie. (See a list of references and original manuals on the website: http://s-provencher.com/pages/contin/shtml.) The (originally Fortran 66) program was formulated primarily to deal with inversion of noisy linear equations, including Fredholm and Lotka-Volterra equations. The use of CONTIN in PCS relates to the general analysis of multi-exponential decay (Laplace inversion). CONTIN has the ability to accept pre-conditions (for example, negative particle sizes and negative percentages of components give mathematically feasible solutions but can be ruled out in advance with CONTIN) that are probably (but not definitely!) likely to improve the accuracy and resolution of the mathematical solutions. The program does not provide a single, unique solution, although a preferred solution is indicated. Rather, a number of possible solutions are given and the user is given the opportunity to inspect these and use auxiliary information in order to select the user’s preferred solution. The indicated
preferred solution is generated on the basis of the best-fit/smoothest solution (it dislikes—“rejects” is not the correct term—solutions with sharp boundaries) with the minimum number of peaks—what is called “parsimony”. This assumption is clearly in error for mixtures of more than one (possibly monodisperse) component (sharp, not smooth, distributions and multiple peaks), so, as with other deconvolution methods, it can produce solutions that are not correct for a known system, and thus scenarios can always be found to ‘defeat’ such algorithms. Thus, any auxiliary information (especially with respect to the amount of noise on the original signal, something that is not easy to define!) is essential in deciding whether any given result is reasonable or not. In Ref (5), Provencher also shows areas where CONTIN has problems and also points out deficits of other deconvolution algorithms (MAXENT, for example). Consult the literature for these less used approaches. Many manufacturers of PCS equipment provide a CONTIN implementation within their software. It is unclear how any proprietary implementation resembles or differs from Provencher’s original implementation; indeed, the computer language may be different for a start. Thus, reading of the equipment supplier’s manual, followed possibly by a telephone call to the manufacturer, may be needed.
8.3.8.2 Multi-Exponential Analysis or Eigenvalue Analysis of the Laplace Transform—This was the first method used to obtain particle size distribution information from correlation coefficient decay curves and is mainly of historical interest now. It is credited to Pike-Ostrowski and involves taking a number of histogram size channels and iteratively fitting the sums of discrete logarithmically spaced exponentials (24 or 32). Thus, the set of predicted exponentials is summed to construct a final exponential, which is compared against that observed in the experiment. The exponentials are then adjusted to optimize the fit and minimize the residual:

Residual = √[(Observed − Theoretical)²]  (6)

as in the standard least-squares approach mentioned earlier while describing NNLS.
8.3.8.3 Multi-Angle Information—In terms of larger systems (>>100 nm, and therefore not relevant to this guide) there is then a variation in scattering intensity with angle (the scattering is non-isotropic, in contrast to the sub-100 nm (approximate) regime). Any angular variation in scattering can be used (along with the known optical properties of the particulate system), in theory at least, to obtain particle size distribution information. This area (0.1 µm and higher) is now the preserve of “laser diffraction” (for example, see ISO 13320-1) where light scattering is involved, and of a range of other non-optical techniques (for example, sedimentation, sieves, electrical sensing zone) dependent on the size range of the system.

8.4 Carrying Out the Measurement:
8.4.1 A generic diagram is shown in Fig. 3.
8.4.2 Fig. 3 shows the “classic” design where scattered light is detected at a variable angle (often 90°), although for dilute small systems (<100 nm) there is little angular dependence.
8.4.3 Clearly, over the history of the technique, a number of technical modifications and developments of the above traditional format are evident in the literature and from reading manufacturers’ specification sheets. The suitability of a particular instrument should be assessed in line with the needs of the application, and it is recommended that samples be run in order to examine such factors as resolution and sensitivity.
8.4.4 Light (normally of fixed wavelength, coherent, possibly polarized and of relatively high intensity, that is, a laser) illuminates the sample, and the scattered light is detected and analyzed. The signal is stored within a correlator (hardware or software) and the computer processes this raw signal data with the parameters (laser wavelength, particle refractive index and so forth; analytical model) that the operator has predefined. A particle size distribution result is then produced either in
FIG. 3 Traditional PCS Measurement Indicating the Main Components of a Typical System
frequency or histogram format. The user needs to check the derived distribution for reasonableness, and repeated consecutive measurements are advised to ascertain the stability of the final answer (dependent both on the stability of the material and the mathematics in the deconvolution). Replicate samples allow the sample-to-sample variation to be ascertained.

8.5 Sampling:
8.5.1 Preparation of a representative sample in a stable and dispersed state is vital to an accurate and meaningful analysis. To obtain the material in this state is not a trivial matter. Useful guides are to be found in the NIST Practice Guide Special Publication 960-1, Particle Size Characterization (10), and the first chapter of T. Allen’s Particle Size Measurement (11), as well as a large number of ASTM standards, only a limited number being relevant to nano systems (for example, Practice C322). The now defunct Part 1 of the BS 3406 series dealt with sampling and this has been partially used within ISO 14488. The examination of the time trend (size with time, consecutive measurements, input energy–sonication) for the particle size distribution in a repeatability study is vital to ensure stability and confidence in the final reported results. Sample-to-sample reproducibility can be assessed by the taking of replicate aliquots or subsamples from the same bulk lot.
8.5.2 To take a representative aliquot of material, the material needs to be moving when the sample is extracted. With a slurry or suspension, sampling is normally carried out by pipetting the required amount of sample from a stirred beaker containing the primary material. If the sample has settled or is settling, and material is extracted only from the supernatant, it is clear that a smaller answer than the bulk material is obtained. Slurry sampling is notoriously difficult to carry out correctly and the use of a Burt sampler (the slurry equivalent of a spinning riffler for powders) is recommended.
8.5.3 The wider the particle size distribution, the more problems are likely to be encountered throughout the sampling and measurement, especially if a “distribution” is sought. Statistically, 10000 particles are required in the last size band for a standard error of 1 %, as the standard error is proportional to 1/n^0.5, where n is the number of particles. Wider particle size distributions are subject to greater possibilities of segregation or settling, which complicate the sampling and measurement issues, although again these are likely to be minimal for truly sub-100 nm systems.
8.5.4 With tiny amounts of sample, subsampling is not likely to be statistically admissible …

… needed prior to the analysis. The use to which the end results are put is also crucial, especially if economic values are at stake (for example, batch control, incoming goods check). We need to consider the implications of what an ‘out-of-specification’ result will mean in monetary terms. Note that this statement implies that we have a specification in place and test against this specification.
8.6.2 Dispersion for small systems often involves the use of large amounts of input (sonication) energy, especially if the material is in a powdered state to start. Some materials (especially biological or those of high aspect ratio) are not likely to withstand huge amounts of energy input. Given that we are to be measuring the Brownian motion of the particulate system, co-joined (aggregated or agglomerated or both) groups of primary particles behave as a single larger particle. This fact needs to be borne in mind, even with microscopy, where judgment as to what can be considered a single particle, or whether it is strongly bound to its neighbor, is certain to give interpretation difficulties.
8.6.3 Note also that on a mass basis a single 100 nm particle is equivalent to 1 million 1 nm particles. On an intensity basis, as there is a d⁶ dependence of scattering on size, a single 100 nm particle is equivalent to 10¹² (or a thousand billion) 1 nm particles. Thus the technique is especially sensitive to any larger or agglomerated particles in the system. It is often desirable to remove (by filtration or centrifugation) even small amounts of any larger material present, recognizing the fact that this is altering the particle size distribution, in particular on a mass basis. It is to be noted that, as the measurement relies on the interpretation of a correlogram and an intensity distribution (normalized to 100 %) is produced, a background subtraction ‘count’ of the solvent is not possible or feasible. Thus cleanliness of the background solvent is essential and filtration to 20 nm is usual.

8.7 Particle Concentration:
8.7.1 A certain concentration of particles is required in the system in order that sufficient scattering can be “seen” by the system—in other words, adequate signal to noise. This is a complex situation, with the particle size, relative refractive index and volume concentration all playing a role. However, note that very low concentrations of poorly scattering materials (for example, proteins) are not likely to generate adequate signal for reasonable measurement in a number of situations.
8.7.2 Commensurate with the requirements of sufficient
...
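The mass and intensity weighting figures quoted in 8.6.3 follow directly from the d³ (volume) and d⁶ (Rayleigh scattering) dependences, and can be checked with a short illustrative script (not part of this standard):

```python
# Illustrative sketch of the weighting arithmetic in 8.6.3: relative
# to one 1 nm particle, a single 100 nm particle of the same material
# carries 100**3 = 1e6 times the mass and, in the Rayleigh regime,
# scatters 100**6 = 1e12 times the intensity (d**6 law).
def mass_equivalents(d_large_nm, d_small_nm):
    """Number of small particles matching the mass of one large one."""
    return (d_large_nm / d_small_nm) ** 3

def intensity_equivalents(d_large_nm, d_small_nm):
    """Number of small particles matching the Rayleigh scattering
    intensity of one large one."""
    return (d_large_nm / d_small_nm) ** 6

print(mass_equivalents(100, 1))       # "1 million" in 8.6.3
print(intensity_equivalents(100, 1))  # "a thousand billion" in 8.6.3
```

This disparity is why even a trace of large or agglomerated material dominates the measured intensity distribution.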
