(Master Thesis) Leavitt Path Algebras
Iain Dangerfield
The central concept of this thesis is that of Leavitt path algebras, a notion
introduced by both Abrams and Aranda Pino in [AA1] and Ara, Moreno and Pardo
in [AMP] in 2004. The idea of using a field K and row-finite graph E to generate
an algebra LK (E) provides an algebraic analogue to Cuntz and Krieger’s work with
C ∗ -algebras of the form C ∗ (E) (which, despite the name, are analytic concepts). At
the same time, Leavitt path algebras also generalise the algebras constructed by W.
G. Leavitt in [Le1] and [Le2], and it is from this connection that the Leavitt path
algebras get their name.
Although the concept of a Leavitt path algebra is relatively new, in the years
since the publication of [AA1] there has been a flurry of activity on the subject.
Many results were initially shown for row-finite graphs, then extended to countable
(but not necessarily row-finite) graphs (as in [AA3]) and then finally shown for
completely arbitrary graphs (see, for example, [AR]). Most of the research has
focused on the connections between ring-theoretic properties of LK (E) and graph-
theoretic properties of E (for example [AA2], [AR] and [ARM2]), the socle and socle
series of a Leavitt path algebra ([AMMS1], [AMMS2] and [ARM1]) and analogues
between LK (E) and their C ∗ -algebraic equivalents C ∗ (E) (for example [To]). Some
papers have classified certain sets of Leavitt path algebras, such as [AAMMS], which
classifies the Leavitt path algebras of graphs with up to three vertices (and without
parallel edges).
We introduce Leavitt path algebras formally in Chapter 2 and look at various results
that arise from the definition. We also examine simple and purely infinite simple
Leavitt path algebras, as well as the ‘desingularisation’ process, which allows us to
construct row-finite graphs from graphs containing infinite emitters in such a way
that their corresponding Leavitt path algebras are Morita equivalent. In Chapter 3
we examine the socle and socle series of a Leavitt path algebra, while in Chapter
4 we examine Leavitt path algebras that are von Neumann regular, π-regular and
weakly regular, as well as Leavitt path algebras that are self-injective. Finally, in
Appendix A we give a detailed definition of a direct limit, a concept that recurs
throughout this thesis.
Acknowledgements
First and foremost I would like to thank my supervisor John Clark for the amaz-
ing amount of time and effort he put into researching various topics, answering my
many questions and proofreading several versions of this thesis. I would also like to
thank Gonzalo Aranda Pino, Kulumani Rangaswamy, Gene Abrams and Mercedes
Siles Molina for their helpful correspondence in response to my various queries. Fi-
nally, I wish to thank my family, friends and the music of the Super Furry Animals
for providing inspiration throughout the year.
Contents
Abstract
Acknowledgements
1 Preliminaries
1.1 Ring Theory
1.2 Module Theory
1.3 Morita Equivalence
1.4 Graph Theory
Bibliography
Index
Chapter 1
Preliminaries
Definition 1.1.1. A ring R has local units if there exists a set of idempotents E
in R such that, for every finite subset X = {x1 , . . . , xn } ⊆ R, there exists an e ∈ E
such that X ⊆ eRe. In this case, exi = xi = xi e for each i = 1, . . . , n and e is said
to be a local unit for the subset X.
Note that if a ring R has identity 1, then {1} is a set of local units for R. In Chapter 2 we will show that every Leavitt path algebra has local units (but is not necessarily unital).
When working with rings that do not necessarily have identity we have to take
care with the way certain things are defined. For example, for an arbitrary element
x in an arbitrary ring R we define the two-sided ideal generated by x, denoted
⟨x⟩, to be the set
⟨x⟩ := { Σ_i ri xsi + Σ_j rj′ x + Σ_k xsk′ + n · x },
where ri , si , rj′ , sk′ ∈ R, n ∈ Z and the sums are finite. If R is unital, it is easy to see that this expression simplifies to the more familiar definition ⟨x⟩ = { Σ_i ri xsi : ri , si ∈ R}. Furthermore, in the more general case that R has local units this simplification still holds, since we can find a nonzero idempotent e ∈ R for which ex = x = xe.
Similarly, for an element a ∈ R we define the left ideal generated by a to be
Ra := {ra + n · a : r ∈ R, n ∈ Z}.
Once again, in the case that R has local units this simplifies to the more familiar
definition Ra = {ra : r ∈ R}, since a = ea for some idempotent e ∈ R.
If R is a ring with local units, then for any element a ∈ R there exists an
idempotent e ∈ R such that a ∈ eRe, by definition. It is easy to see that eRe is a
subring of R. Furthermore, note that eRe is always unital (with identity e), even
if R is not. The following result concerns the subring eRe. Recall that a ring R is
simple if the only two-sided ideals contained in R are {0} and R itself.
Proposition 1.1.2. Let R be a ring with local units. Then R is simple if and only
if the subring eRe is simple for every nonzero idempotent e ∈ R.
Proof. Suppose that R is simple and let e be any nonzero idempotent in R. To show
that eRe is simple it suffices to show that, for any nonzero element exe ∈ eRe, the
two-sided ideal of eRe generated by exe is equal to all of eRe. Take an arbitrary
nonzero element ex′e ∈ eRe. Since ex′e is an element of R and R is simple, the two-sided ideal of R generated by ex′e is equal to R. Now take another arbitrary element eye ∈ eRe. Since R = ⟨ex′e⟩ and R has local units we can write y = Σ_i ri (ex′e)si , where each ri , si ∈ R. Thus
eye = Σ_i eri (ex′e)si e = Σ_i (eri e)(ex′e)(esi e),
so that eye lies in the two-sided ideal of eRe generated by ex′e.
It can be shown (see, for example, [NV, page 6]) that if R is a Z-graded ring and
I is a graded ideal of R, then the quotient ring R/I is also Z-graded. Similarly, if
R and R/I are both Z-graded then I must also be graded.
Proof. Let x ∈ ker(φ) and write x = x_{n1} + · · · + x_{nt} , where each x_{ni} ∈ R_{ni} . Thus 0 = φ(x) = Σ_{i=1}^{t} φ(x_{ni}). Since φ is a graded homomorphism, φ(x_{ni}) ∈ S_{ni} for each i. However, since S is a graded ring, the element 0 can only be expressed one way as a sum of homogeneous components from each S_{ni} , namely 0 = 0 + · · · + 0. Thus for each i ∈ {1, . . . , t} we have φ(x_{ni}) = 0 and so x_{ni} ∈ ker(φ), as required.
The following lemma provides a useful way to determine when a principal left
ideal is a minimal left ideal.
Proof. Suppose Rx contains a nonzero left ideal I and take an arbitrary nonzero
a ∈ I. Since a = bx + nx for some b ∈ R and n ∈ N, we have Ra ⊆ Rx. Similarly,
since x ∈ Ra then Rx ⊆ Ra and so Rx = Ra. Since Ra ⊆ I, we must have I = Rx
and so Rx is minimal.
Proof. It is well-known that every element x ∈ J(R) is quasiregular, that is, there
exists a y ∈ R such that x + y = −xy = −yx (see, for example, [D, Chapter 4]).
Suppose that J(R) contains an idempotent e. Then −e ∈ J(R), so there exists a
y ∈ R such that y − e = ey. Multiplying on the left by −e gives −ey + e = −ey,
and thus e = 0.
Lemma 1.1.8. Let R be a Z-graded ring. Suppose that R contains a set of local
units E such that each element of E is homogeneous. Then J(R) is a graded ideal.
Proof. Let x ∈ J(R) and decompose x = xn1 +· · ·+xnt into a sum of its homogeneous
components. Let e be an element of E such that exe = x. Then x = exe =
exn1 e + · · · + exnt e. Since e is a local unit it must be an idempotent, and therefore
e has degree 0. Thus exni e is homogeneous with the same degree as xni . Since the
decomposition of an element into homogeneous components is unique, we must have
exni e = xni for each i ∈ {1, . . . , t}.
A ring R is said to be von Neumann regular if, for every a ∈ R, there exists an
x ∈ R for which a = axa. Furthermore, we say that x is a von Neumann regular
inverse or quasi-inverse for a. Note that any division ring is von Neumann regular,
since we can simply choose x = a−1 if a is nonzero. The question of which Leavitt
path algebras are von Neumann regular (as well as other definitions of ‘regular’) will
be visited in Section 4.2. The following lemma concerning von Neumann regular
rings is from [G1, Lemma 1.3].
Lemma 1.1.9. Let R be a ring and let J and K be two two-sided ideals in R with
J ⊆ K. Then K is von Neumann regular if and only if J and K/J are von Neumann
regular.
Proof. If K is von Neumann regular, then clearly K/J is von Neumann regular.
Now consider a ∈ J. Since J ⊆ K, there exists x ∈ K such that a = axa. Now
y = xax ∈ J (since J is a two-sided ideal) and aya = axaxa = axa = a. Thus J is
von Neumann regular.
Now suppose that K/J and J are both von Neumann regular and consider a ∈ K.
Since K/J is von Neumann regular there exists x ∈ K for which a + J = axa + J,
We conclude this section with a useful result regarding the matrix ring Mn (K),
where K is a field.
Lemma 1.1.10. Let K be a field. Then Mn (K), the ring of n × n matrices over
K, is simple for all n ∈ N.
Proof. Let n ∈ N, let J be a nonzero two-sided ideal of Mn (K) and let A = (aij ) ∈ J.
If we can show that the n × n identity matrix In is in ⟨A⟩ (the two-sided ideal generated by A) then we have Mn (K) = ⟨A⟩ = J, proving that Mn (K) is simple. Let Eij be the matrix unit with 1 in the (i, j) position and zeros elsewhere. Choose i, j ∈ {1, . . . , n} such that aij ≠ 0. Then aij E11 = E1i A Ej1 ∈ ⟨A⟩. Since K is a field, we have ((aij )−1 E11 )(aij E11 ) = E11 ∈ ⟨A⟩. By similar arguments, we have E22 , . . . , Enn ∈ ⟨A⟩ and thus In = E11 + E22 + · · · + Enn ∈ ⟨A⟩, as required.
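For instance, taking n = 2, i = 1 and j = 2 (so that a12 ≠ 0), the identity a12 E11 = E1i A Ej1 used above can be checked directly:
$$E_{11}AE_{21}=\begin{pmatrix}1&0\\0&0\end{pmatrix}\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}\begin{pmatrix}0&0\\1&0\end{pmatrix}=\begin{pmatrix}a_{11}&a_{12}\\0&0\end{pmatrix}\begin{pmatrix}0&0\\1&0\end{pmatrix}=\begin{pmatrix}a_{12}&0\\0&0\end{pmatrix}=a_{12}E_{11},$$
and multiplying by (a12 )−1 E11 then recovers E11 ∈ ⟨A⟩.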
(ii) (r1 + r2 )m = r1 m + r2 m,
and all n ∈ N . Note that if we view a ring R as the additive abelian group (R, +),
then R can be seen as a left module over itself, with module multiplication given
by multiplication in the ring R. Furthermore, the submodules of R R are simply the
left ideals of R.
We denote by R-mod the category of all left R-modules together with all R-
homomorphisms f : M → N , where M and N are left R-modules. (See Section
1.3 for a formal definition of a category.) However, in this thesis we will concern
ourselves with a slightly more restricted category of R-modules. We define an R-
module M to be unital if
RM := { Σ_{i=1}^{n} ri mi : ri ∈ R, mi ∈ M } = M,
R-mod.
Lemma 1.2.1. Let R be a ring with local units and let M be a unital left R-module.
Then
Now let R and S be two rings. Suppose that M is a left R-module and a right
S-module, with the property that (rm)s = r(ms) for all r ∈ R, s ∈ S and m ∈ M .
Then we say that M is an R-S-bimodule, and we sometimes denote M by R MS .
Furthermore, if M and N are two R-S-bimodules, then a map f : M → N is a
bimodule homomorphism if it is both a homomorphism of left R-modules and
right S-modules.
The following lemma gives a useful way of visualising the subring eRe of a ring R,
where e is an idempotent. Recall that EndR (Re) is the ring of all R-homomorphisms
from the left R-module Re to itself.
Lemma 1.2.2. Let R be a ring and let e be an idempotent in R. Then EndR (Re) ≅ (eRe)Op , where (eRe)Op is the opposite ring of eRe with multiplication · defined by (er1 e) · (er2 e) = (er2 e)(er1 e) for all r1 , r2 ∈ R. Similarly, EndR (eR) ≅ eRe.
as required. To check that φ is multiplicative, note that we must check that φ(f g) =
φ(f ) · φ(g) = φ(g)φ(f ). Now (f g)(e) = f (rg e) = f (rg e2 ) = rg ef (e) = rg erf e, and so
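(The isomorphism here is presumably the map φ : EndR (Re) → (eRe)Op given by φ(f ) = f (e), which is consistent with the computation above: f (e) = f (e · e) = ef (e) ∈ eRe; if f (e) = 0 then f (re) = rf (e) = 0 for all r ∈ R, so φ is injective; and for any ere ∈ eRe the map x ↦ x(ere) lies in EndR (Re) and is sent to ere, so φ is surjective. The equality (f g)(e) = g(e)f (e) then shows that φ reverses multiplication, giving EndR (Re) ≅ (eRe)Op .)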
We now turn our attention to direct products and direct sums. Recall that if
R is a ring and {Ai : i ∈ I} is a family of left R-modules, we define the direct
product of the family {Ai : i ∈ I} to be the R-module formed by taking the
cartesian product of the family, and denote this by ∏i∈I Ai . Furthermore, we define the external direct sum of the family to be
⊕i∈I Ai := { (ai )i∈I ∈ ∏i∈I Ai : ai ≠ 0Ai for only a finite number of indices i ∈ I }.
Note that if I is a finite index set then we have ∏i∈I Ai = ⊕i∈I Ai .
for all i ∈ I. We can also show that any internal direct sum can be regarded as an
external direct sum, and vice versa, and hence there is no ambiguity in the notation.
The following result concerns left ideals generated by idempotents, and is useful
when working with rings with local units (though it is valid for any ring).
Proposition 1.2.4. Let R be a ring and let I be an index set. For any family of
left R-modules {Ai : i ∈ I} and any left R-module B, we have a group isomorphism
Hom(⊕i∈I Ai , B) ≅ ∏i∈I Hom(Ai , B)
Proof. For each j ∈ I we define πj : ⊕i∈I Ai → Aj to be the natural projection map (ai )i∈I ↦ aj , and φj : Aj → ⊕i∈I Ai to be the natural injection map aj ↦ (bi )i∈I , where bj = aj and bi = 0 for i ≠ j. Let f ∈ Hom(⊕i∈I Ai , B). Then, for all i ∈ I we have f φi ∈ Hom(Ai , B). Define τ : Hom(⊕i∈I Ai , B) → ∏i∈I Hom(Ai , B) by τ (f ) = (f φi )i∈I . It is easy to show that τ is a group homomorphism.
Suppose that τ (f ) = 0 for some f ∈ Hom(⊕i∈I Ai , B), so that f φi = 0 for all i ∈ I. Then, given any (ai )i∈I ∈ ⊕i∈I Ai , we have f ((ai )i∈I ) = f (Σi∈I φi (ai )) = Σi∈I f φi (ai ) = 0 and so f = 0. Thus τ is a monomorphism. Now let g = (gi )i∈I ∈ ∏i∈I Hom(Ai , B), so that gi : Ai → B is a homomorphism for each i ∈ I. Define f ∈ Hom(⊕i∈I Ai , B) by f ((ai )i∈I ) = Σj∈I gj (πj ((ai )i∈I )). Now, for each j ∈ I we have f φj (aj ) = gj (aj ) and so f φj = gj . Thus τ (f ) = g and so τ is an epimorphism, completing the proof.
P ≅ P ⊕ B ≅ (P ⊕ B) ⊕ B ≅ P ⊕ B2 ≅ · · · ≅ P ⊕ Bn.
The following result from [AGP, Theorem 1.6] gives a useful way of determining
when a unital ring is purely infinite. We state it here without proof.
Theorem 1.2.5. Let R be a simple unital ring. Then R is purely infinite if and
only if the following conditions are satisfied:
(i) R is not a division ring, and
(ii) for every nonzero element x ∈ R, there exist elements s, t ∈ R such that sxt = 1.
In Section 2.3 we will be examining purely infinite simple Leavitt path algebras.
As mentioned earlier, any Leavitt path algebra has local units but is not necessarily
unital, and so we will need to adapt Theorem 1.2.5 for the more general case in
which R has local units. This is not straightforward, however, and we will need to
use Morita equivalence (introduced in Section 1.3) to do so.
Furthermore, we have
Proof. Suppose that R has no infinite idempotents but S has an infinite idempotent
e. Then, by Proposition 1.2.6, there exists an idempotent f ∈ S and elements
x, y ∈ S such that e = xy, f = yx and f e = ef = f ≠ e. Since these elements
are also in R, Proposition 1.2.6 also gives that e is an infinite idempotent of R, a
contradiction.
0 → A --f--> B --g--> C → 0.
[Diagram: an epimorphism g : B → C → 0, a homomorphism f : P → C, and a map h : P → B.]
That is, gh = f .
Lemma 1.2.10. Let A and P be left R-modules, where P is projective, and let
f : A → P be an R-homomorphism. If f is an epimorphism, then there exists a left
R-module P ′ for which A ≅ P ⊕ P ′ .
[Diagram: the epimorphism f : A → P → 0, the identity map 1P : P → P , and a map h : P → A with f h = 1P .]
[Diagram: a monomorphism 0 → A --g--> B, a homomorphism f : A → Q, and a map h : B → Q.]
That is, hg = f .
In the case that R is injective as a left module over itself, we say that R is left
self-injective. We will examine self-injective Leavitt path algebras in Section 4.4.
[Diagrams: a monomorphism 0 → A --g--> B, a homomorphism f : A → M , and an extension h̄ : B → M constructed via the injective module Q.]
A similar proof shows that the direct product of injective R-modules is also
injective. Furthermore, we can use similar arguments to show that any direct sum-
mand of a projective R-module is projective, and that the direct sum of projective
R-modules is projective.
We conclude this section with a series of results that generalise well-known results
for R-modules, where R is unital, to the more general case that R has local units.
This first result is from [ARM2, Proposition 2.2].
Proposition 1.2.13. Let R be a ring with local units. Then for any idempotent
e ∈ R, the left ideal Re is a projective module in the category R-Mod.
Proof. Since R has local units, it is easy to see that Re is unital and nondegenerate
[Diagram: an epimorphism g : B → C → 0 and a homomorphism f : Re → C.]
[Diagram: the canonical map ⊗ : A × B → A ⊗R B and a map f : A × B → G factoring as f = f̄ ◦ ⊗ through f̄ : A ⊗R B → G.]
That is, f̄ ◦ ⊗ = f . It can be shown that such an abelian group A ⊗R B will always
exist (see, for example, [O, Section 2.2]). The group A⊗R B is generated by elements
of the form a ⊗R b, where a ∈ A, b ∈ B and a ⊗R b = ⊗((a, b)).
It can be shown for any M that this sequence is always exact on the right-hand
side, and so to show that M is flat it suffices to show that any monomorphism
f : A → B gives rise to a monomorphism f ⊗ 1M : A ⊗ M → B ⊗ M .
We now give several results concerning flat modules over rings with local units.
We begin with the following lemma from [ARM2, Lemma 2.9].
Lemma 1.2.15. Let R be a ring with local units. For any M ∈ Mod-R, the map
µM : M ⊗ R → M given by µM (Σ_{i=1}^{n} (mi ⊗ ri )) = Σ_{i=1}^{n} mi ri is an isomorphism of right R-modules.
Proof. First note that M ⊗ R is indeed a right R-module, with module multiplication given by (Σ_{i=1}^{n} (mi ⊗ ri ))r = Σ_{i=1}^{n} (mi ⊗ (ri r)) for all Σ_{i=1}^{n} (mi ⊗ ri ) ∈ M ⊗ R and

epimorphism. Now suppose Σ_{i=1}^{n} mi ri = 0. Since R has local units, there exists
Corollary 1.2.16. Any ring R with local units is flat as a left R-module.
0 → A ⊗ R --(f ⊗ 1R)--> B ⊗ R
The following result regarding flat modules is from Rotman [Ro, Theorem 3.60].
Though Rotman’s original result is for unital rings, the proof is the same for rings
with local units and so we omit it.
Proposition 1.2.17. Let R be a ring with local units, F a flat left R-module and
K a submodule of F . Then F/K is a flat left R-module if and only if K ∩ IF = IK
for every finitely generated right ideal I of R.
Recall that, for a unital ring R, an R-module is said to be free if it has a basis;
that is, a linearly independent generating set. The following definition, given in
[ARM2, Definition 2.11], extends this notion to rings with local units.
Definition 1.2.18. Let R be a ring with local units and let F be a left R-module.
Suppose there exists an index set I and sets B = {bi }i∈I ⊆ F and U = {ui }i∈I ⊆ R,
where each ui is an idempotent and bi = ui bi for all i ∈ I. We say F is a U -free left
R-module with U -basis B if, for all x ∈ F , there exists a unique family {ri }i∈I ⊆ R
(with only finitely many ri nonzero) such that ri = ri ui for each i ∈ I and
x = Σ_{i∈I} ri bi .
Note that, in particular, we have F = ⊕i∈I Rbi .
If R is a unital ring with identity 1, then taking ui = 1 for each i ∈ I reduces the
definition of a U -free left R-module to the familiar definition of a free left R-module.
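As a simple illustration, for any idempotent e in a ring R, the left ideal Re is itself a U -free module with U = {e} and U -basis B = {e}: given x ∈ Re we can take r = x (so that r = re and x = re), and if r = re and r′ = r′e satisfy re = r′e then r = re = r′e = r′, so the representing family is unique.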
We can expand the following result from [Ro, Theorem 3.62] to the more general
case involving local units and U -free modules, applying Proposition 1.2.17 in place
of [Ro, Theorem 3.60]. Again, we will omit the proof.
Theorem 1.2.19. Let R be a ring with local units and let F be a U -free left R-
module. Then, for any submodule S of F , the following statements are equivalent:
We conclude this section with the following proposition from [ARM2, Proposition
2.17], which generalises the well-known result for unital rings that any R-module is
the epimorphic image of a free R-module.
Proposition 1.2.20. If R is a ring with local units then every module M ∈ R-Mod
is the epimorphic image of a U -free left R-module.
Proof. Let M ∈ R-Mod. By Lemma 1.2.1, for every m ∈ M there exists an idempotent em ∈ R such that m = em m. Since M is unital, for any x ∈ M we have x = Σ_{m∈M} rm m = Σ_{m∈M} rm em m, where only finitely many rm are nonzero. Let φ : ⊕_{m∈M} Rem → M be the map (rm em )m∈M ↦ Σ_{m∈M} rm em m, which, by the

nonzero), then taking rm = sm em we have a unique family {rm }m∈M ⊆ R such that x = Σ_{m∈M} rm bm and rm em = rm . Thus ⊕_{m∈M} Rem is a U -free left R-module.
Morita equivalence is a fairly deep and complex field of theory. Here we give
enough background so that the basic concepts can be understood and we have
sufficient tools to apply these concepts to relevant areas; however, some results will
be stated without proof, as they require a large amount of background theory that
would take us far outside the bounds of this thesis. We begin by looking at category
theory.
Definition 1.3.1. A category C is made up of two sets: Obj(C), the set of objects
in C, and Mor(C), the set of morphisms between objects in C. If A, B ∈ Obj(C),
we let Mor(A, B) denote the set of morphisms from A to B. Furthermore, if f ∈
Mor(A, B), we can denote this by the usual function notation f : A → B.
In Section 1.2 we introduced the category R-Mod. In light of the above definition,
we can see that the objects of R-Mod are the unital, nondegenerate left R-modules,
the morphisms are R-homomorphisms between such modules, and the operation ◦
is function composition.
(i) for all A, B, C ∈ Obj(C) and all f ∈ Mor(A, B) and g ∈ Mor(B, C) we have
F(g ◦ f ) = F(g) ◦ F(f ), and
We now move on to a concept that allows us to say when two functors are
‘equivalent’ in some way.
Definition 1.3.4. Let C and D be two categories, and let F and G be two covariant
functors from C to D. A natural transformation η from F to G (denoted η : F →
G) associates to each A ∈ Obj(C) a morphism ηA : F(A) → G(A) in D, such that
for every f ∈ Mor(A, B) (where B ∈ Obj(C)) we have ηB ◦ F(f ) = G(f ) ◦ ηA . In
other words,
[Diagram: the square with horizontal maps F(f ) : F(A) → F(B) and G(f ) : G(A) → G(B), and vertical maps ηA : F(A) → G(A) and ηB : F(B) → G(B), commutes.]
Definition 1.3.5. Let R and S be rings. We say that the categories R-Mod and
S-Mod are equivalent if there exist functors F : R-Mod → S-Mod and G : S-Mod
→ R-Mod such that
Furthermore, if R-Mod and S-Mod are equivalent then we say that R is Morita
equivalent to S.
Definition 1.3.6. Let R and S be two rings, let R NS and S MR be two bimodules
and let (−, −) : N × M → R and [−, −] : M × N → S be two maps. Furthermore,
suppose we have two maps φ : N ⊗S M → R and ϕ : M ⊗R N → S given by
Theorem 1.3.7. Let R and S be two idempotent rings. Then R and S are Morita
equivalent if and only if there exists a surjective Morita context (R, S, N, M, φ, ϕ).
We owe the following results to Ánh and Márki, whose research examining Morita
equivalence for non-unital rings is invaluable, as we will require many of these results
to be valid for rings that do not necessarily have identity. The first proposition is
from [AM, Proposition 3.3].
Proposition 1.3.8. Let R and S be two Morita equivalent rings with local units.
Then the lattice of ideals of R is isomorphic to the lattice of ideals of S; in particular,
R is simple if and only if S is simple.
Proposition 1.3.9. Let R be a ring with local units. If there exists an idempotent
e ∈ R for which R = ReR, then R is Morita equivalent to the subring eRe.
We now establish some definitions and results that are useful in the context of
Morita equivalence.
Definition 1.3.11. Let R be a ring and let P be a right R-module. The trace of
P , denoted tr(P ), is defined by
tr(P ) = Σ {x ∈ R : x = g(p) for some p ∈ P and some g ∈ HomR (P, R)},
the set of all finite sums of such elements.
It can be shown that tr(P ) is a two-sided ideal of R (see, for example, [L2, Propo-
sition 2.40]).
We now look at two results that allow us to determine when a right R-module P
is a generator for Mod-R. The following result has been established for unital rings
(see for example [L2, Theorem 18.8]). Here we extend it to rings with local units by
adapting part of the proof of [AA2, Proposition 10], (i) ⇐⇒ (ii).
Proposition 1.3.12. Let R be a ring with local units and let P be a right R-module.
If tr(P ) = R then P is a generator for Mod-R.
Proof. Let E = {ei : i ∈ I} be a set of local units for R. If tr(P ) = R, then for each ei ∈ E we can write ei = g_{i1}(p_{i1}) + · · · + g_{i_{s(i)}}(p_{i_{s(i)}}) for some pt ∈ P and some gt ∈ HomR (P, R). If we define λei : R → ei R by λei (r) = ei r, then letting Ji = {i1 , . . . , is(i) } we have that λei ◦ (⊕t∈Ji gt ) : P (Ji ) → R → ei R is an epimorphism. To see this, take an arbitrary ei r ∈ ei R. Then
ei r = λei (ei r) = (λei (ei ))r = λei ((Σt∈Ji gt (pt )) r) = λei ◦ (⊕t∈Ji gt )((pt r)t∈Ji ),
and so λei ◦ (⊕t∈Ji gt ) is indeed an epimorphism. Let J be the disjoint union of the sets Ji and define ϕ : P (J) → R by ϕ|P (Ji ) = λei ◦ (⊕t∈Ji gt ). Since any element r ∈ R
Proposition 1.3.12 leads to the following lemma, which has also been adapted
from the proof of [AA2, Proposition 10], (i) ⇐⇒ (ii).
Lemma 1.3.13. Let R be a ring with local units and let P be a nonzero, finitely
generated projective right R-module. If R is simple then P is a generator for Mod-R.
Proof. Since P is finitely generated, there exists an epimorphism Rn → P for some n ∈ N. Since this map is an epimorphism and P is projective, there must exist P ′ ∈ Mod-R for which Rn ≅ P ⊕ P ′ (by Lemma 1.2.10).
Thus P is isomorphic to a direct summand of Rn (since, if θ : P ⊕ P ′ → Rn is an isomorphism, then Rn = θ(P ) ⊕ θ(P ′ )) and so HomR (P, Rn ) ≠ 0. However, since HomR (P, Rn ) ≅ (HomR (P, R))n (by the right R-module analogue of Proposition 1.2.4), we have (HomR (P, R))n ≠ 0 and so HomR (P, R) ≠ 0. Thus tr(P ) is nonzero and so, since tr(P ) is a two-sided ideal of R and R is simple, we have tr(P ) = R. Thus, by Proposition 1.3.12, P is a generator for Mod-R.
Proposition 1.3.15. Let R and S be two rings with local units. Then R is Morita equivalent to S if and only if there is a locally projective generator PR = lim→ i∈I Pi in Mod-R for which S ≅ lim→ i∈I End(Pi ).
This second proposition is from Lam [L2, Proposition 18.44]. While the result
is given in a unital context, the proof is valid for any ring. Here we state it without
proof.
Note that if R and S are two Morita equivalent rings and PR and PS are pro-
generators for R and S, respectively, then combining Propositions 1.3.8, 1.3.15 and
1.3.16 (and viewing R, S, End(PR ) and End(PS ) as right modules over themselves)
we have that the lattices of submodules of R, S, End(PR ), End(PS ), PR and PS are
all isomorphic.
We now come to the main result of this section, the proof of which has been
expanded from [AA2, Proposition 10], (i) ⇐⇒ (ii).
Theorem 1.3.17. Let R and S be two Morita equivalent rings with local units.
Then R is purely infinite simple if and only if S is purely infinite simple; that is,
the property ‘purely infinite simple’ is Morita invariant.
Proof. Suppose that R is purely infinite simple and let P be a nonzero, finitely-
generated projective right R-module. We know that P is a generator for Mod-R
by Lemma 1.3.13. Since R is purely infinite, it must contain an infinite idempotent
e such that the right ideal eR is directly infinite, so that there exists a nonzero
submodule B of R such that
eR ≅ B ⊕ eR ≅ · · · ≅ B m ⊕ eR

eR ≅ B n ⊕ eR ≅ P ⊕ C ⊕ eR = P ⊕ Q.
Proposition 1.3.18. Let R be a ring with local units. Then R is purely infinite
simple if and only if the subring eRe is purely infinite simple for every nonzero
idempotent e ∈ R.
Proof. Suppose that R is purely infinite simple. Then, for every nonzero idempotent
e ∈ R we have R = ReR (by the simplicity of R) and so, by Proposition 1.3.9, R
is Morita equivalent to eRe. Thus, since the property ‘purely infinite simple’ is
a Morita invariant of R (by Theorem 1.3.17), eRe must be purely infinite simple.
Conversely, if eRe is purely infinite simple for every nonzero idempotent e ∈ R then
We are now finally in a position to adapt Theorem 1.2.5 to the more general case
in which R has local units. Note the subtle difference in condition (ii).
Theorem 1.3.19. Let R be a simple ring with local units. Then R is purely infinite
if and only if the following conditions are satisfied:
(ii) for every pair of nonzero elements x, y ∈ R, there exist elements s, t ∈ R such
that sxt = y.
Now suppose that conditions (i) and (ii) hold. Let I be a nonzero ideal of R
and let x be a nonzero element of I. Then, by condition (ii), for any y ∈ R there
exist a, b ∈ R such that y = axb ∈ I, and so R must be simple. Now let f be a
nonzero idempotent of R (such an element must exist since R has local units). Since
R is simple we have Rf R = R, and so R is Morita equivalent to the subring f Rf
by Proposition 1.3.9. Thus the lattice of ideals of R is isomorphic to the lattice of
ideals of f Rf (by Proposition 1.3.8) and so it follows from condition (i) that f Rf
is not a division ring.
Now take a nonzero element x ∈ f Rf . Applying condition (ii), we can find s′ , t′ ∈ R such that s′ xt′ = f . Let s = f s′ f and t = f t′ f . Then, noting that x = f xf , we have sxt = (f s′ f )x(f t′ f ) = f s′ (f xf )t′ f = f (s′ xt′ )f = f (f )f = f . Since f is
The equivalence given in Theorem 1.3.19 will prove useful when we come to
determine precisely which Leavitt path algebras are purely infinite simple in Section
2.3.
Since we will be dealing exclusively with directed graphs in this thesis, we will
henceforth refer to them as simply ‘graphs’.
From Definition 1.4.1 it follows that, for any vertex v in E 0 , s−1 (v) is the set of
all edges emitted by v, while r−1 (v) is the set of all edges received by v. If v does
not emit any edges, so that s−1 (v) = ∅, then v is called a sink. If v does not receive
any edges, it is called a source. Referring to the graph E in Example 1.4.2, we can
see that v0 is a source, while v2 , and v3 are sinks.
where (∞) denotes an infinite number of edges from u to v (so that u is an infinite
emitter). Many texts assume that a given graph E is row-finite, or even that E
contains no singular vertices at all. However, in this thesis we will not be making
any such assumptions unless stated otherwise.
A path p is said to be a cycle if s(p) = r(p) and s(ei ) 6= s(ej ) for all i 6= j. In
other words, a cycle is a path that begins and ends on the same vertex and does not
pass through any vertex more than once. If c is a cycle with s(c) = r(c) = v, then
we say that c is based at v. If a graph E does not contain any cycles, it is said to
be acyclic.
[Diagram: a graph E with an edge f from u to v, a cycle e1 e2 e3 e4 based at v, and an edge g (an exit for the cycle) ending at w.]
Then p = f e1 e2 g is a path in E ∗ with s(p) = u, r(p) = w and l(p) = 4. If we let
q = f e1 , then q is an initial subpath of p. Furthermore, c = e1 e2 e3 e4 is a cycle in E
based at v, and g is an exit for c.
For a vertex v ∈ E 0 , we define the tree of v, denoted T (v), to be the set of all
vertices in E 0 to which v connects; that is
T (v) = {w ∈ E 0 : v ≥ w}.
Note that we always have v ∈ T (v) since all vertices connect to themselves, by
definition. We can extend the definition of a tree to an arbitrary subset X of E 0 by defining T (X) = ∪_{v∈X} T (v). Since v ∈ T (v) for each v ∈ X we therefore have X ⊆ T (X).
[Diagram: the graph E with edges v1 → v2 , three edges from v2 to u, w and v3 respectively, v3 → v4 , v4 → u and w → v4 .]
Then, for example, we have T (v1 ) = E 0 , since there is a path from v1 to every vertex
in E, while T (w) = {w, v4 , u}. The only bifurcation in E is v2 . Furthermore, the
line points in E are u, w, v3 and v4 ; that is, Pl (E) = {u, w, v3 , v4 }.
Definition 1.4.6. We denote by E ∞ the set of all paths of infinite length in E, and
we denote by E ≤∞ the set E ∞ together with the set of all finite paths in E whose
end vertex is a sink. A vertex v is cofinal if, for every path p ∈ E ≤∞ , there exists
a vertex w in p such that v ≥ w. Furthermore, we say that a graph E is cofinal if
all of its vertices are cofinal.
Now we define two concepts that will feature heavily in our study of Leavitt path
algebras.
In other words, a subset H is hereditary if, for each v ∈ H, every vertex that
v connects to is also in H. Furthermore, a subset H is saturated if every regular
vertex that feeds into H, and only into H, is also in H. In the study of Leavitt path
algebras we will be particularly interested in subsets of E 0 that are both hereditary
and saturated, which we call simply ‘hereditary saturated subsets’ of E 0 . Note that
if a vertex v is a line point then any vertex w ∈ T (v) must be a line point, since
T (w) ⊆ T (v). Thus Pl (E) is a hereditary subset of E 0 – however, it is not necessarily
saturated.
[Diagram: the graph E from the previous example.]
We can see that S = {v1 , v2 } forms a saturated (but not hereditary) subset of E 0 .
Furthermore, H = {u, w, v3 , v4 } forms a hereditary subset of E 0 . Indeed, this is the
set of line points of E, which is always hereditary, as noted above. However, H is
not saturated, since {r(e) : s(e) = v2 } = {u, w, v3 } ⊂ H but v2 ∉ H. It is easy to
see that the hereditary saturated closure of H must contain v2 , and therefore must
also contain v1 . Thus H̄ = E 0 .
Since u is the only sink in E, and E is finite, E ≤∞ is the set of all paths in E ∗
that end in u. Since every vertex in E 0 connects to u, every vertex is cofinal and
thus E is cofinal.
Lemma 1.4.9. Let E be a graph and let X be a subset of E 0 . Then the hereditary saturated closure of X is the set ∪_{n=0}^{∞} Gn (X), where G0 (X) = T (X) and, for n ≥ 1,
Gn (X) = {v ∈ E 0 : 0 < |s−1 (v)| < ∞ and r(s−1 (v)) ⊆ Gn−1 (X)} ∪ Gn−1 (X).
Proof. First, note that Gm (X) ⊆ Gn (X) for each m ≤ n. For ease of notation, we set G(X) = ∪_{n=0}^{∞} Gn (X). Now X ⊆ T (X) = G0 (X) ⊆ G(X), and so G(X)
contains X. To show that G(X) is hereditary, suppose that v ∈ G(X) and let
w ∈ T (v). Furthermore, let p = e1 . . . el be a path with s(p) = v and r(p) = w,
and let n be the minimum integer for which v ∈ Gn (X). If n = 0, then v ∈ T (X)
and so w ∈ T (X) ⊆ G(X) and we are done. If n ≠ 0, then by definition we
have that 0 < |s−1 (v)| < ∞ and r(s−1 (v)) ⊆ Gn−1 (X). In particular, we have
r(e1 ) ∈ Gn−1 (X). Now let m be the minimum integer for which r(e1 ) ∈ Gm (X)
(noting that m ≤ n − 1). If m = 0, then again w ∈ T (X) and we are done;
otherwise we have r(e2 ) ∈ Gm−1 (X) by the same logic as above. Thus repeating
this argument either yields that w ∈ T (X) or r(el ) = w ∈ Gp (X) for some p < n.
In either case, w ∈ G(X) and so G(X) is hereditary.
To show that G(X) is saturated, suppose we have a regular vertex v ∈ E 0 such
that r(s−1 (v)) ⊆ G(X). Let n be the minimum integer for which r(s−1 (v)) ⊆ Gn (X).
Then by definition we have v ∈ Gn+1 (X) ⊆ G(X) and so G(X) is saturated.
Finally, suppose that H is any hereditary saturated subset containing X. Since
H is hereditary, it must contain T (X), so that T (X) = G0 (X) ⊆ H. Furthermore,
since H is saturated, H must contain the set S1 = {v ∈ E 0 : 0 < |s−1 (v)| <
∞ and r(s−1 (v)) ⊆ T (X)}, and so G1 (X) = S1 ∪ T (X) ⊆ H. Continuing this
argument, we see that Gn (X) ⊆ H for each non-negative integer n, and so G(X) = ∪_{n=0}^{∞} Gn (X) ⊆ H. Therefore G(X) is the hereditary saturated closure of X, as required.
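To illustrate Lemma 1.4.9, consider a (hypothetical) graph with three vertices a, b, c and two edges a → b and b → c, and take X = {c}. Then G0 (X) = T (c) = {c}; G1 (X) = {b, c}, since b is a regular vertex with r(s−1 (b)) = {c} ⊆ G0 (X); and G2 (X) = {a, b, c} = E 0 , since r(s−1 (a)) = {b} ⊆ G1 (X). Thus the hereditary saturated closure of {c} is all of E 0 .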
Proof. Suppose that the only hereditary saturated subsets of E 0 are ∅ and E 0 . Let
v ∈ E 0 and p ∈ E ≤∞ . To show that E is cofinal, it suffices to show that we can
find a vertex w ∈ p0 for which w ∈ T (v). Let X = {v}. Then X̄ ≠ ∅, and so X̄ = E 0 = ∪_{n=0}^{∞} Gn (X) (where each Gn (X) is as defined in Lemma 1.4.9). Let
Now suppose that conditions (i) and (ii) hold and that there exists a hereditary
saturated subset H of E 0 such that ∅ ⊂ H ⊂ E 0 . Choose a vertex v such that
v ∈ E 0 \H. Now v cannot be a singular vertex, because condition (ii) would imply
that w ≥ v for any w ∈ H, and therefore that v ∈ H by the hereditary nature of H.
In particular, v is not a sink and so s−1 (v) ≠ ∅. Furthermore, r(s−1 (v)) ⊈ H, for otherwise we would have v ∈ H by the saturated property of H. Thus there exists an edge e1 ∈ s−1 (v) for which r(e1 ) ∉ H. Again, r(e1 ) cannot be a singular vertex, so we can repeat the above procedure to find an edge e2 for which s(e2 ) = r(e1 ) and r(e2 ) ∉ H, and so on. Thus we can form an infinite path p = e1 e2 . . . with
p0 ∩ H = ∅. We know that p is infinite since each vertex in p0 is not in H and
therefore cannot be a sink (by the argument used above). Thus p ∈ E ∞ . However,
since E is cofinal, for any w ∈ H there exists a vertex u ∈ p0 such that w ≥ u, and
so u ∈ H, a contradiction. Thus the only hereditary saturated subsets of E 0 are ∅
and E 0 , as required.
Chapter 2
Leavitt Path Algebras
As we will see, the relations (A1) and (A2) defined on A(E) essentially preserve
the path structure of the associated graph E, hence the name ‘path algebra’. In
order to extend this concept to a Leavitt path algebra, we need to introduce the
following concept.
for all e ∈ E 1 . For ease of notation, we usually denote the functions r′ and s′ as simply r and s.
Definition 2.1.3. Let K be a field and let E be an arbitrary graph. The Leavitt
path algebra of E with coefficients in K, denoted LK (E), is defined to be the
K-algebra generated by the sets E 0 , E 1 and (E 1 )∗ , i.e. K[E 0 ∪ E 1 ∪ (E 1 )∗ ], subject to the following relations:
(A1) vi vj = δij vi for all vi , vj ∈ E 0 ;
(A2) s(e)e = e = er(e) and r(e)e∗ = e∗ = e∗ s(e) for all e ∈ E 1 ;
(CK1) e∗ f = δe,f r(e) for all e, f ∈ E 1 ;
(CK2) v = Σ_{e∈s−1 (v)} ee∗ for every regular vertex v ∈ E 0 .
In other words, the Leavitt path algebra of a graph E is the path K-algebra over
the extended graph E,
b subject to the relations (CK1) and (CK2), which are known
as the Cuntz-Krieger relations. Note that, by the (A1) relation, each v ∈ E 0 is
an idempotent in LK (E) and the elements of E 0 are mutually orthogonal in LK (E).
Thus the vertices of E form a set of orthogonal idempotents in LK (E).
We now give several examples of Leavitt path algebras. From this point we will
always use K to denote an arbitrary field.
Example 2.1.4. The simplest possible example is the graph I1 consisting of a single
vertex v and no edges:
•v
In this case we have simply LK (I1 ) = Kv, which is isomorphic to the ring K.
Similarly, if we add an extra vertex w to obtain the graph I1 × I1 :
•v •w
Things get more interesting if we add an edge e between v and w to form the
graph M2 :
•v --e--> •w
We can define a map φ : LK (M2 ) → M2 (K) on the generators by φ(v) = E11 , φ(w) = E22 , φ(e) = E12 and φ(e∗ ) = E21 , where Eij is the matrix unit with 1 in the (i, j) position and zeros elsewhere. We extend φ linearly and multiplicatively. Since any element in M2 (K) is a K-linear combination of the four matrix units listed above, φ is clearly an epimorphism. Furthermore, it is easy to see that φ is a monomorphism since these matrix units
Using the general matrix unit property that Eij Ekl = δjk Eil , it is easy to see
that φ preserves the (A1), (A2) and (CK1) relations. For example, to check that
the equality e = er(e) is preserved by φ we must check that φ(e) = φ(er(e)) for all
e ∈ E 1 , which in this case reduces to showing that E12 = E12 E22 , as required. To check that the (CK2) relation is preserved, recall that the relation
is only defined at regular vertices. Thus we only need to check that the equality
v = ee∗ is preserved by φ, which is easily seen since φ(v) = E11 = E12 E21 = φ(e)φ(e∗ ).
Example 2.1.5. We can generalise the above example by defining the finite line
graph with n vertices, denoted Mn , to be the graph
•v1 --e1--> •v2 --e2--> •v3 --> · · · --> •vn−1 --en−1--> •vn
Example 2.1.6. We define the single loop graph, denoted R1 , to be the graph
[a single vertex v with one loop e based at v]
We can extend the single loop graph to the rose with n leaves graph, denoted
Rn :
[a single vertex v with n loops e1 , e2 , . . . , en based at v]
For each n ∈ N, we have that LK (Rn ) is isomorphic to the Leavitt algebra L(1, n),
which is the unital K-algebra generated by elements {xi , yi : i = 1, . . . , n} and subject to the relations xi yj = δij 1 for all i, j ∈ {1, . . . , n} and Σ_{i=1}^{n} yi xi = 1.
Example 2.1.7. We define the infinite clock graph, denoted C∞ , to be the graph
[Diagram: a single vertex u emitting infinitely many edges, indicated by (∞), one to each of the vertices v1 , v2 , v3 , v4 , . . .]
of LK (C∞ ) as follows:
Note that the map here is similar to the mapping from LK (M2 ) → M2 (K) in
Example 2.1.4. Indeed, it is as if we have an infinite number of copies of the graph
M2 emanating from a single central vertex u. Thus, in a similar fashion to that
example it is easy to see that the Leavitt path algebra relations are preserved by φ.
As an example, we check that φ(uei ) = φ(ei ) for an arbitrary edge ei :
as required. Note that we do not need to check the (CK2) relation as there are no
regular vertices in C∞ . Finally, it is clear that φ is an isomorphism, as required.
From the four defining Leavitt path algebra relations we can deduce the product
of two arbitrary generating elements in LK (E). For example, by applying relations
(A1) and (A2), we can deduce the product of two arbitrary edges ei , ej ∈ E 1 :
ei ej = ei r(ei )s(ej )ej = δr(ei ),s(ej ) ei ej ,
and similarly, for two arbitrary ghost edges,
e∗i e∗j = e∗i s(ei )r(ej )e∗j = δs(ei ),r(ej ) e∗i e∗j .
Thus the product of two edges ei and ej is nonzero if and only if ei and ej are adjacent
in the graph E. Extending this to an arbitrary number of edges e1 , e2 , . . . en ∈ E 1 ,
we can see that the product e1 e2 . . . en is nonzero if and only if e1 e2 . . . en is a path
in E (and similarly the product e∗n . . . e∗2 e∗1 is nonzero if and only if e∗n . . . e∗2 e∗1 is a
ghost path in E).
The relations (A1) and (A2) give similar results when multiplying an arbitrary
vertex v ∈ E 0 with an arbitrary edge e ∈ E 1 : we have ve = δv,s(e) e and ev = δv,r(e) e.
Thus the product of a vertex by an edge is nonzero only when the vertex is the
source of that edge, and the product of an edge by a vertex is nonzero only when
the vertex is the range of that edge. Essentially the relations (A1) and (A2) can
be seen as preserving the path structure of the graph E, as mentioned earlier. The
following lemma from [AA1, Lemma 1.5] solidifies this concept. The proof here
follows the same argument as the proof of [Rae, Corollary 1.15].
(ii) kei1 . . . eim e∗jn . . . e∗j1 , where k ∈ K, ei1 , . . . , eim , ej1 , . . . , ejn ∈ E 1 and m, n ≥ 0,
m + n ≥ 1,
so that p and q are either paths of length 0 at the vertex vi , or p = ei1 . . . eim , q =
ej1 . . . ejn and at least one of p and q has length greater than 0.
From Lemma 2.1.8 we know that every monomial in LK (E) is of the form kpq ∗ ,
where p and q are paths in E. But what happens when we form the product p∗ q?
The following lemma gives a useful result concerning such products.
Lemma 2.1.10. Let E be an arbitrary graph and let p and q be two paths in E.
(i) If p and q have the same length, then in LK (E) we have p∗ q = δp,q r(p).
Proof. (i) Let p = ei1 . . . ein and q = ej1 . . . ejn , where each eik , ejk ∈ E 1 . By the
(CK1) relation we have that e∗ik ejk = δeik ,ejk r(eik ) for each k ∈ {1, . . . , n}. Also note
that r(eik ) = s(eik+1 ) and so the (A2) relation gives r(eik )eik+1 = eik+1 . Thus we
have
p∗ q = e∗in · · · e∗i1 ej1 · · · ejn
= δei1 ,ej1 e∗in · · · e∗i2 ej2 · · · ejn
= · · ·
= δei1 ,ej1 · · · δein−1 ,ejn−1 e∗in ejn
= δei1 ,ej1 · · · δein−1 ,ejn−1 δein ,ejn r(ejn ).
Thus, if eik ≠ ejk for any k ∈ {1, . . . , n}, so that p ≠ q, then the above equation gives p∗ q = 0. Otherwise, if p = q we have p∗ q = r(ejn ) = r(p), as required.
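For example, in the rose with two leaves R2 of Example 2.1.6 (so that s(ei ) = r(ei ) = v for i = 1, 2), taking p = q = e1 e2 gives
p∗ q = e∗2 e∗1 e1 e2 = e∗2 r(e1 )e2 = e∗2 e2 = v = r(p),
whereas taking p = e1 e2 and q = e2 e1 gives p∗ q = e∗2 e∗1 e2 e1 = e∗2 (δe1 ,e2 r(e1 ))e1 = 0, in accordance with part (i).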
Recall the definition of a Z-graded ring from Section 1.1. If we equate degree in
LK (E) with path length in E, it is natural to think of edges as elements of degree 1,
ghost edges as elements of degree −1 and vertices as elements of zero degree. As it
turns out, this intuitive grading does indeed fulfil the requirements for a Z-grading
on LK (E), as the following lemma from [AA1, Lemma 1.7] shows.
Case 1: l(q1 ) = l(p2 ). Then q1∗ p2 = r(p2 ) and so xy = p1 q2∗ . Since l(p1 ) − l(q2 ) =
(m + l(q1 )) − (l(p2 ) − n) = m + n, we have that xy ∈ LK (E)m+n .
Case 2: l(q1 ) > l(p2 ). Then q1 = p2 q for some subpath q of q1 , and so xy = p1 q ∗ q2∗ .
Since l(p1 ) − l(q2 q) = l(p1 ) − (l(q2 ) + l(q)) = l(p1 ) − (l(q2 ) + l(q1 ) − l(p2 )) = (l(p1 ) −
l(q1 )) + (l(p2 ) − l(q2 )) = m + n, we again have that xy ∈ LK (E)m+n .
Case 3: l(p2 ) > l(q1 ). Then a similar argument to Case 2 gives xy ∈ LK (E)m+n .
Finally, if x = Σ_{i=1}^{r} p1i q1i∗ ∈ LK (E)m and y = Σ_{j=1}^{s} p2j q2j∗ ∈ LK (E)n , then from the argument above it is clear that xy ∈ LK (E)m+n . Thus LK (E)m LK (E)n ⊆ LK (E)m+n , as required.
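For instance, in the graph M2 of Example 2.1.4 we have e ∈ LK (M2 )1 , e∗ ∈ LK (M2 )−1 and v, w ∈ LK (M2 )0 , and the products ee∗ = v and e∗ e = w do indeed lie in LK (M2 )0 , as the grading requires.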
Finally, it is natural to ask under what conditions LK (E) is unital and, more
generally, under what conditions LK (E) has local units. We close this section with
Proof. (i) Suppose E 0 is finite and consider an arbitrary monomial kpq ∗ ∈ LK (E), where p, q ∈ E ∗ and k ∈ K. Let α = Σ_{vi ∈E 0} vi . Then
α(kpq ∗ ) = (Σ_{vi ∈E 0} vi ) kpq ∗ = k (Σ_{vi ∈E 0} δvi ,s(p) s(p)) pq ∗ = ks(p)pq ∗ = kpq ∗ .
Similarly, we can show that (kpq ∗ )α = kpq ∗ . Since any element in LK (E) is a sum
of such monomials, we must have that αx = x = xα for all x ∈ LK (E). Thus α is
an identity for LK (E).
define
V = ∪_{i=1}^{t} {s(pij ), s(qji ) : j = 1, . . . , s(i)}
and let β = Σ_{v∈V} v. Then, using the same arguments as in (i), it is easy to see that βai = ai = ai β for each ai ∈ X. Since β is a finite sum and an idempotent, it is a local unit for X. Thus E 0 generates a set of local units for LK (E).
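As a concrete illustration, consider the infinite clock graph C∞ of Example 2.1.7, writing ei for the edge from u to vi . For the finite subset X = {e1 , e∗1 } we may take β = u + v1 : by the (A1) and (A2) relations, βe1 = ue1 + v1 e1 = e1 + 0 = e1 and e1 β = e1 u + e1 v1 = 0 + e1 = e1 , and similarly βe∗1 = e∗1 = e∗1 β, so β is a local unit for X.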
Lemma 2.2.1. Let E be an arbitrary graph and let J be an ideal of LK (E). The
set of all vertices contained in J, i.e. J ∩ E 0 , forms a hereditary saturated subset of
E 0.
Proof. Let H = J ∩ E 0 and let v ∈ H and w ∈ T (v). Then there exists a path
p ∈ E ∗ with s(p) = v and r(p) = w. By Lemma 2.1.10, we have w = p∗ p = p∗ vp ∈ J
and so H is hereditary.
Now suppose that w is a regular vertex in E 0 such that for all e ∈ E 1 with
s(e) = w we have r(e) ∈ H. Then the (CK2) relation gives
w = Σ_{s(e)=w} ee∗ = Σ_{s(e)=w} er(e)e∗ ∈ J,
and so H is saturated.
The fact that any Leavitt path algebra has local units leads to the following
useful lemma.
Lemma 2.2.3. Let E be an arbitrary graph and let I be an ideal of LK (E). Then
I ∩ E 0 = E 0 if and only if I = LK (E).
Since LK (E) has local units for any graph E, we can apply many of the results in
Section 1.2 to the category LK (E)-Mod. We give a few examples of such applications
here.
Lemma 2.2.4. Let E be an arbitrary graph. The Leavitt path algebra LK (E) is a
projective module in the category LK (E)-Mod.
idempotent and LK (E) has local units, we can apply Proposition 1.2.13 to obtain
that each summand LK (E)v is projective in LK (E)-Mod. Thus, since the direct
sum of projective modules is also projective, LK (E) is projective.
Lemma 2.2.4 tells us that every Leavitt path algebra is projective as a left module
over itself (and we can show similarly that every Leavitt path algebra is projective
as a right module over itself). However, the same is not true for injectivity; that is,
not all Leavitt path algebras are left or right self-injective. In Section 4.4 we will
examine self-injective Leavitt path algebras in detail.
For an arbitrary graph E, Corollary 1.2.16 tells us that LK (E) is flat as a left
LK (E)-module. Furthermore, since LK (E) = ⊕_{v∈E 0} LK (E)v, then taking B = E 0 and U = E 0 in the definition of U -free module we can see that every Leavitt path algebra LK (E) is an E 0 -free left LK (E)-module with basis E 0 .
Now we briefly return to graph theory to define the concept of a closed path.
Note that any cycle is a closed simple path based at any of its vertices. However,
a closed simple path based at v may not be a cycle as it may visit any of its vertices
(other than v) more than once. Similarly, a closed path based at v may not be simple
as it may visit v more than once. We illustrate this with the following example.
[Diagram: a graph with vertices u, v, w, x and edges e1 , . . . , e6 , used to illustrate closed paths and closed simple paths based at v.]
We now use Lemma 2.1.10 to prove the following useful result from [AA1, Lemma
2.3] regarding closed paths.
Lemma 2.2.7. Every closed path (of length greater than zero) can be decomposed
into a unique series of closed simple paths (of length greater than zero); that is, for
every p ∈ CP(v), there exist unique c1 , . . . , cm ∈ CSP(v) (with l(ci ) > 0 for each
i ∈ {1, . . . , m}) such that p = c1 . . . cm .
Proof. Let p = e1 . . . en and let et1 , . . . , etm be the edges in p for which r(eti ) = v,
where t1 < · · · < tm = n. Let c1 = e1 . . . et1 and cj = etj−1 +1 . . . etj for each
1 < j ≤ m. Thus p = c1 . . . cm , where each cj ∈ CSP(v) and l(cj ) > 0.
To show that this decomposition is unique, suppose that p = c1 . . . cr = d1 . . . ds ,
with ci , dj ∈ CSP(v) and l(ci ), l(dj ) > 0. Furthermore, suppose that r ≥ s. By
Lemma 2.1.10 we have c∗1 c1 = v, and so multiplication by c∗1 on the left gives 0 ≠ vc2 . . . cr = c∗1 d1 . . . ds . Since the right-hand side is nonzero, we must have c1 = d1 ,
and so by Lemma 2.1.10 again we have c2 . . . cr = d2 . . . ds (noting that vc2 = c2
and vd2 = d2 ). Repeating this process gives ci = di for each i ∈ {1, . . . , s}, and so
p = c1 . . . cs = d1 . . . ds .
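For example, in the rose with two leaves R2 of Example 2.1.6, the closed path p = e1 e2 e1 ∈ CP(v) decomposes uniquely as p = c1 c2 c3 with c1 = e1 , c2 = e2 and c3 = e1 , each of which is a closed simple path based at v of length 1.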
Note that n(v) is always nonzero, since v ∈ R(v) for each v ∈ E 0 . We apply this
definition in the following lemma from [A, Lemma 4.4.3].
Lemma 2.2.8. Let E be a finite and row-finite graph and let v ∈ E 0 be a sink.
Then
Iv := span({αβ ∗ : α, β ∈ E ∗ , r(α) = v = r(β)})
Now let n = n(v), as defined above. Since E is both finite and row-finite, n must
also be finite. Rename the elements in the set {α ∈ E ∗ : r(α) = v} as {p1 , . . . , pn },
giving Iv = span{pi p∗j : i, j = 1, . . . , n}. Consider the expression (pi p∗j )(pk p∗l ) and suppose that j ≠ k and (pi p∗j )(pk p∗l ) ≠ 0. Then, as above, either pj = pk p or pk = pj q for some paths p, q ∈ E ∗ . In either case, l(p) > 0 or l(q) > 0 since pj ≠ pk . However, this is impossible as v is a sink. Thus j ≠ k implies that (pi p∗j )(pk p∗l ) = 0. Otherwise, we have (pi p∗j )(pj p∗l ) = pi vp∗l = pi p∗l . Thus {pi p∗j : i, j = 1, . . . , n} is a set of matrix units for Iv and so Iv ≅ Mn(v) (K).
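As a check, consider the graph M2 of Example 2.1.4, whose unique sink is w. The paths ending at w are w itself and e, so n(w) = 2 and Iw = span{ww∗ , we∗ , ew∗ , ee∗ } = span{w, e∗ , e, v}, and the lemma gives Iw ≅ M2 (K), in agreement with the isomorphism LK (M2 ) ≅ M2 (K) of Example 2.1.4.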
Lemma 2.2.8 leads to the following important result from [AAS, Proposition 3.5].
Lemma 2.2.9. Let E be a finite, row-finite and acyclic graph, and let {v1 , . . . , vt }
be the sinks of E. Then
LK (E) ≅ ⊕_{i=1}^{t} Mn(vi ) (K).
= Σ {αe1 e∗1 β ∗ : e1 ∈ E 1 , s(e1 ) = r(α)}.
Now consider a specific summand of the above expression, αe′1 (e′1 )∗ β ∗ . Either r(e′1 ) = vi for some i ∈ {1, . . . , t}, in which case αe′1 (e′1 )∗ β ∗ ∈ Ivi , or r(e′1 ) is not a sink, in which case we can expand the expression by again applying the (CK2) relation at r(e′1 ), giving
αe′1 (e′1 )∗ β ∗ = αe′1 (Σ {e2 e∗2 : e2 ∈ E 1 , s(e2 ) = r(e′1 )}) (e′1 )∗ β ∗
= Σ {αe′1 e2 e∗2 (e′1 )∗ β ∗ : e2 ∈ E 1 , s(e2 ) = r(e′1 )}.
Suppose that repeating the above process yields a sequence of edges e′1 e′2 . . . that never reaches a sink, and consider the infinite set of vertices T = {r(e′1 ), r(e′2 ), . . .}. Now these vertices must be distinct, since if r(e′i ) = r(e′j ) for some r(e′i ), r(e′j ) ∈ T then we would have a cycle in E, contradicting the fact that E is acyclic. However, we cannot have an infinite number of distinct vertices since E is finite. Thus, for each summand of αβ ∗ , we eventually reach an expression of the form αe′1 e′2 . . . e′n (e′n )∗ . . . (e′2 )∗ (e′1 )∗ β ∗ , where r(e′n ) is a sink; that is, r(e′n ) = vi for some i ∈ {1, . . . , t}. Thus each summand of αβ ∗ is in Ivi for some sink vi , and since αβ ∗ was an arbitrary monomial and these monomials generate LK (E), we have LK (E) = Σ_{i=1}^{t} Ivi .
To show that this sum is direct, consider two arbitrary monomials αi βi∗ ∈ Ivi and αj βj∗ ∈ Ivj for i ≠ j. Suppose that (αi βi∗ )(αj βj∗ ) ≠ 0. As in the proof of Lemma 2.2.8, this implies that either αj = βi p or βi = αj q for some paths p, q ∈ E ∗ . Again, this is impossible as αj ≠ βi (since vi ≠ vj ) and vi , vj are sinks. Thus (αi βi∗ )(αj βj∗ ) = 0. Since such monomials generate Ivi and Ivj , we have Ivi Ivj = {0} for all i ≠ j in {1, . . . , t}. Note also that since E is finite, LK (E) is unital (by Lemma 2.1.12). Since LK (E) = Σ_{i=1}^{t} Ivi , we have 1 = e1 + · · · + et , where each
To illustrate Lemma 2.2.9, consider the finite line graph with t vertices, Mt :
•v1 --e1--> •v2 --e2--> •v3 --> · · · --> •vt−1 --et−1--> •vt
Here Mt has a single sink vt with R(vt ) = {vt , pt−1 , . . . , p2 , p1 }, where we define
pi = ei ei+1 . . . et−1 for each i = 1, . . . , t − 1. Thus n(vt ) = t and so Lemma 2.2.9
gives LK (Mt ) ≅ Mt (K), which agrees with the formulation given in Example 2.1.5.
Definition 2.2.10. Let R be a ring with local units. The ring R is said to be
We now prove the following powerful result from [AMMS1, Proposition 3.1],
which greatly simplifies the proof of several subsequent theorems. Though the orig-
inal theorem was given in a row-finite context, the proof is still valid for arbitrary
graphs. Here we have expanded the proof for ease of understanding.
(ii) there exist a vertex w and a cycle without exits c based at w such that y1 . . . yr xz1 . . . zs is a nonzero element in
wLK (E)w = { Σ_{i=−m}^{n} ki c^i : m, n ∈ N0 and ki ∈ K }.
Proof. We first show that for a nonzero element x in LK (E), there is a path µ in E
such that xµ is nonzero and in only real edges. Consider a vertex v ∈ E 0 such that
xv ≠ 0 (note that such a vertex will always exist, since if x = Σ_{i=1}^{n} ki pi qi∗ , where

Since xv ≠ 0 we have v − Σ_{i=1}^{m} ei e∗i ≠ 0. Since s(ei ) = v for each ei , by the (CK2) relation there must exist an f ∈ E 1 such that s(f ) = v but f ≠ ei for each i. Thus xvf = (Σ_{i=1}^{m} −βei e∗i + β)f = 0 + βf ≠ 0 (since r(β) = v), and so, since β is in only real edges, we have a path vf ∈ E ∗ such that xvf is nonzero and in only real edges.
Case (2): xvei ≠ 0 for some i, say for i = 1. Then xve1 = β1 + βe1 , with the
degree in ghost edges of xve1 strictly less than that of xv. If β1 is a polynomial
in only real edges, then we are done. Otherwise, we can repeat the above process,
reducing the degree in ghost edges with each iteration until we are left with an
element in only real edges (this must happen since the degree in ghost edges of xv
must, of course, be finite).
Now we can assume that x ∈ LK (E) is a nonzero polynomial in only real edges.
Write x = Σ_{i=1}^{r} ki αi , where 0 ≠ ki ∈ K for each i and each αi is a real path in
where v = r(α1 ) and βi = α1∗ αi . Note that deg(βi ) ≤ deg(βi+1 ) and βi ≠ βj for i ≠ j.
If βi = 0 for some i then we can apply our inductive hypothesis and we are done.
Furthermore, if some βi does not begin or end in v, then we can apply our inductive
hypothesis to vz or zv (both nonzero since our βi are distinct). Thus we can assume
that each βi is nonzero and begins and ends in v.
Now suppose that there exists some path τ such that τ ∗ βi = 0 for some, but not
all, βi . Then we can apply our inductive hypothesis to τ ∗ z ≠ 0 and we are done. Thus we can suppose that, for a given path τ , if τ ∗ βi ≠ 0 for some i, then τ ∗ βi ≠ 0 for all i. Let τ = βj for some fixed j. Since τ ∗ βj ≠ 0, we must have τ ∗ βj+1 ≠ 0. Since
deg(βj ) ≤ deg(βj+1 ), by Lemma 2.1.10 we have that either βj = βj+1 or βj+1 = βj rj
for some path rj ∈ CP(v). Since the βi are distinct, we must have the latter case.
Thus in general we have βi+1 = βi ri for some path ri ∈ CP(v), and so we can write
z = k1 v + k2 γ1 + k3 γ1 γ2 + · · · + kr γ1 γ2 . . . γr−1 , where each γi is a closed path based
at v.
Now write each γi as γi = γi1 . . . γin(i) , where each γij is a closed simple path
based at v. If the paths γij are not identical, then we must have γ11 ≠ γij for some γij , and so γij∗ γ11 = 0 (since one cannot be an initial subpath of the other). Thus we have 0 ≠ γij∗ zγij = k1 v, since γ11 appears in every term but the first.
Now assume that the paths are identical, so that γij = γ (where γ ∈ E ∗ ) for
each i, j. If γ is not a cycle then γ must contain a cycle; that is, if γ = e1 . . . en
(with each ei ∈ E 1 ) then there exist ei1 , . . . , eik with i1 , . . . , ik ∈ {1, . . . , n} such that
i1 < · · · < ik and d = ei1 . . . eik is a cycle based at v (noting that k < n). Thus we
have that d∗ γ = 0 (since d is clearly not an initial subpath of γ) and so d∗ zd = k1 v
and we are done.
Thus we can assume that z is a polynomial in the cycle c = γ. Suppose that f
is an exit for c, so that s(e) = s(f ) for some edge e in c but f ≠ e. Write c = aeb (where a, b ∈ E ∗ ) and let ρ = af , which is nonzero since r(a) = s(e) = s(f ). Then ρ∗ c = f ∗ a∗ aeb = f ∗ eb = 0 and so ρ∗ zρ = ρ∗ k1 vρ = k1 r(f ) and we are done.
Finally, if c is a cycle with no exits based at v then z ∈ { Σ_{i=−m}^{n} ki c^i } for m, n ∈
this set is contained in vLK (E)v since each c^i begins and ends in v. To see the converse containment, first note that the elements of vLK (E)v must be linear combinations of monomials αβ ∗ , where α, β ∈ E ∗ , s(α) = v = s(β) and r(α) = r(β). Now, since c has no exits, any path p ∈ E ∗ with s(p) = v must be of the form c^n p′ , where n ≥ 0 and p′ is an initial subpath of c (for if p were to contain an edge distinct from any edge in c, that edge would constitute an exit for c). Thus α = c^m α′ and β = c^n β ′ for some m, n ≥ 0. Since α′ and β ′ are initial subpaths of c and r(α′ ) = r(β ′ ), we must have α′ = β ′ . Let α′ = e1 . . . ek . For any edge e in c, the vertex s(e) emits only e (since c has no exits) and so applying the (CK2) relation at s(e) yields ee∗ = s(e). Thus
α′ (β ′ )∗ = e1 . . . ek e∗k . . . e∗1 = s(e1 ) = v,
and so αβ ∗ = c^m α′ (β ′ )∗ (c∗ )^n = c^m v(c∗ )^n = c^m (c∗ )^n . Again, using the fact that c has no exits we can apply the (CK2) relation to give cc∗ = v (letting α′ = β ′ = c in the above equation). Thus αβ ∗ = c^m (c∗ )^n = c^{m−n} , and so vLK (E)v is precisely the set of all polynomials in c.
To see that the two cases are not mutually exclusive, consider the graph E
consisting of a single vertex v and a single loop e based at v, and take x = e. Thus
e∗ xv = v (giving case (1)) and vxv = e, a cycle without exits (giving case (2)).
Proposition 2.2.11 leads to the following useful corollary from [AMMS2, Corol-
lary 3.3].
Corollary 2.2.12. Let E be an arbitrary graph. Then:
(i) every nonzero graded ideal of LK (E) contains a vertex; and
(ii) if E contains no cycles without exits, then every nonzero ideal of LK (E) contains a vertex.
The following two ‘Uniqueness theorems’ are given by Tomforde as [To, Theorem
4.6] and [To, Theorem 6.8], respectively. In Tomforde’s paper, the proofs are fairly
involved. However, in light of Proposition 2.2.11 and its subsequent corollary, the
results follow almost instantly.
Proof. By Lemma 1.1.5, ker(π) is a graded ideal of LK (E). So, by Corollary 2.2.12,
if ker(π) is nonzero it must contain a vertex, contradicting the fact that π(v) ≠ 0
for every vertex v ∈ E 0 . Thus ker(π) = {0} and so π is a monomorphism.

Proof. Suppose that ker(π) ≠ 0. Since ker(π) is an ideal of LK (E) and E contains
no cycles without exits, Corollary 2.2.12 tells us that ker(π) must contain a vertex.
This contradicts the fact that π(v) ≠ 0 for every vertex v ∈ E 0 , and so ker(π) = {0}
and thus π is a monomorphism.
In addition to the above two results, Proposition 2.2.11 also leads to the following
useful theorem from [AMMS2, Theorem 3.7]. Recall that an element x in a ring R
is said to be nilpotent if xn = 0 for some n ∈ N.
The main goal of this section is Theorem 2.3.9, which describes precisely which graphs yield Leavitt path algebras that
are both simple and purely infinite; that is, ‘purely infinite simple’.
The following result was first shown for row-finite graphs in [AA1, Theorem 3.11]
and then extended to arbitrary graphs in [AA3, Theorem 3.1]. In comparison to the
published versions, the first part of the proof given here is much simpler, thanks to
Proposition 2.2.11.
Theorem 2.3.1. Let E be an arbitrary graph. Then the Leavitt path algebra LK (E)
is simple if and only if E satisfies the following conditions:
(i) the only hereditary and saturated subsets of E 0 are ∅ and E 0 ; and
(ii) every cycle in E has an exit.
Proof. Suppose statements (i) and (ii) are true and let J be a nonzero ideal of
LK (E). Since E contains no cycles without exits, Proposition 2.2.11 tells us that
J contains at least one vertex. Thus the vertices of J form a nonempty, hereditary
saturated subset of E 0 (by Lemma 2.2.1) and so J ∩ E 0 = E 0 , by (i). Thus, by
Lemma 2.2.3, we have that J = LK (E), proving LK (E) is simple.
Conversely, suppose that LK (E) is simple. To show that condition (i) holds, suppose to the contrary that H is a hereditary saturated subset of E 0 with H ≠ ∅ and H ≠ E 0 . Define a graph F by F 0 = E 0 \H and F 1 = {e ∈ E 1 : r(e) ∉ H}. In other words, F consists of all the vertices of E that are not in H, and all the
edges whose range is not in H. To ensure that F is a well-defined graph, we must
ensure that sF (F 1 ) ∪ rF (F 1 ) ⊆ F 0 . From the definition it is clear that rF (F 1 ) ⊆ F 0 .
Furthermore, suppose that there exists an edge e ∈ F 1 with s(e) ∈ H. Then, by the
hereditary nature of H, we have r(e) ∈ H, which contradicts the definition of F 1 .
Thus s(e) ∈ F 0 , and so sF (F 1 ) ⊆ F 0 and F is therefore well-defined.
Now define a K-algebra homomorphism φ : LK (E) → LK (F ) on the generators of LK (E) by

φ(v) = v if v ∉ H and φ(v) = 0 if v ∈ H;
φ(e) = e if r(e) ∉ H and φ(e) = 0 if r(e) ∈ H;
φ(e∗ ) = e∗ if r(e) ∉ H and φ(e∗ ) = 0 if r(e) ∈ H,

extended linearly and multiplicatively.
First, we check that the (A1) relation holds, i.e. that φ(vi )φ(vj ) = δij φ(vi ) for
all vi , vj ∈ E 0 . We must examine several different cases:
Case 1: vi , vj ∉ H. Then φ(vi )φ(vj ) = vi vj = δij vi = δij φ(vi ).
Case 2: vi ∉ H, vj ∈ H. Then δij vi = 0 and so φ(vi )φ(vj ) = 0 = δij φ(vi ). A
similar argument holds for vi ∈ H, vj ∉ H.
Case 3: vi , vj ∈ H. Then φ(vi )φ(vj ) = 0 = δij φ(vi ).
Next, we check that the (A2) relations hold. First, we check that φ(s(e))φ(e) =
φ(e) for all e ∈ E 1 .
Case 1: r(e) ∉ H. Then s(e) ∉ H and so φ(s(e))φ(e) = s(e)e = e = φ(e).
Case 2: r(e) ∈ H. Then φ(s(e))φ(e) = 0 = φ(e).
Similar arguments show that φ(e)φ(r(e)) = φ(e), φ(r(e))φ(e∗ ) = φ(e∗ ) and
φ(e∗ )φ(s(e)) = φ(e∗ ) for all e ∈ E 1 .
Next we check that the (CK1) relation holds, i.e. that φ(e∗i )φ(ej ) = δij φ(r(ei ))
for all ei , ej ∈ E 1 .
Case 1: r(ei ), r(ej ) ∉ H. Then φ(e∗i )φ(ej ) = e∗i ej = δij r(ei ) = δij φ(r(ei )).
Case 2: r(ei ) ∈ H, r(ej ) ∉ H. Then ei ≠ ej , so φ(e∗i )φ(ej ) = 0 = δij φ(r(ei )). A
similar argument holds for r(ei ) ∉ H, r(ej ) ∈ H.
Case 3: r(ei ), r(ej ) ∈ H. Then again φ(e∗i )φ(ej ) = 0 = δij φ(r(ei )).
Finally, we check that the (CK2) relation holds, i.e. that φ(v − ΣsE (ei )=v ei e∗i ) = 0
for every regular vertex v ∈ E 0 .
Now consider the ideal ker(φ). Since H is nonempty, there must exist a vertex
v ∈ H. Since φ(v) = 0 and v ≠ 0, we have ker(φ) ≠ {0}. Furthermore, since
H ≠ E 0 , there must exist a vertex w ∈ E 0 \H. Since φ(w) = w ≠ 0, we have
ker(φ) ≠ LK (E). Thus ker(φ) is a proper nontrivial ideal of LK (E), and so LK (E)
is not simple, as required.
To complete the proof, we now suppose that E contains a cycle c without exits,
and show again that this implies that LK (E) cannot be simple. Let v be the base
of this cycle and consider the nonzero ideal ⟨v + c⟩. We show that ⟨v + c⟩ ≠ LK (E)
by showing that v ∉ ⟨v + c⟩. Let c = ei1 . . . eiσ , where s(ei1 ) = r(eiσ ) = v. Since
c has no exits, we have that cc∗ = v (see the proof of Proposition 2.2.11, page 59).
Furthermore, by Lemma 2.1.10 we know that c∗ c = v. Moreover, we must have
CSP(v) = {c}, since the existence of a closed simple path based at v that is distinct
from c would imply that c has an exit.
Suppose, to the contrary, that v ∈ ⟨v + c⟩, so that we can write v = Σnt=1 kt αt (v + c)βt ,
where each kt ∈ K and each αt , βt is a monomial in LK (E).
Each summand in the above expression must begin and end in v, for otherwise
v = v(v)v = Σnt=1 kt v(αt (v + c)βt )v = 0, a contradiction. Furthermore, since c is
based at v, we can write v + c as v(v + c)v. Thus, since the right-hand side of the
above expression is nonzero, each αt and βt must begin and end in v. Thus each
αt , βt is a monomial in vLK (E)v and so, as shown in the proof of Proposition 2.2.11,
we have that each αt and βt is equal to cm or (c∗ )n for some m, n ∈ N0 .
Now each αt βt term is a power of either c or c∗ , and so we can write v = (v + c)P (c, c∗ ),
where P is a polynomial with coefficients in K, i.e.

P (c, c∗ ) = l−m (c∗ )m + · · · + l−1 c∗ + l0 v + l1 c + · · · + ln cn ,

where each li ∈ K and m, n ∈ N.
Suppose that l−i ≠ 0 for some index i > 0, and let m0 be the maximum such index.
Then

(v + c)P (c, c∗ ) = l−m0 (c∗ )m0 + terms of higher degree = v.

Thus we must have that l−m0 = 0, a contradiction. Thus l−i = 0 for all i > 0.
Similarly, we can show that li = 0 for all i > 0. Thus P (c, c∗ ) = l0 v, and so
v = (v + c)l0 v = l0 (v + c), which is impossible since deg(v) = 0 but deg(l0 (v + c)) =
deg(c) > 0. Thus we have obtained our contradiction, proving that v ∉ ⟨v + c⟩.
Hence ⟨v + c⟩ is a proper nonzero ideal of LK (E), so LK (E) is not simple, completing the proof.
Example 2.3.2. We now apply Theorem 2.3.1 to some of the Leavitt path algebras
introduced in Section 2.1.
(i) The finite line graph Mn . For every n ∈ N, Mn has no cycles, so trivially con-
dition (ii) of Theorem 2.3.1 is satisfied. Furthermore, suppose that H is a nonempty
hereditary saturated subset of E 0 , so that vi ∈ H for some i = 1, . . . , n. Then, by
the hereditary nature of H, we must have vi+1 , . . . , vn ∈ H. Furthermore, by the
saturated nature of H we must have vi−1 ∈ H, and thus inductively vi−2 , . . . , v1 ∈ H.
Thus H = (Mn )0 , so condition (i) is satisfied, and therefore LK (Mn ) ≅ Mn (K) is simple for all n ∈ N.
(ii) The single loop graph R1 . The single loop in R1 forms a cycle without an exit,
so that condition (ii) is not satisfied and thus LK (R1 ) ≅ K[x, x−1 ] is not simple.
(iii) The rose with n leaves Rn . Every edge ei ∈ (Rn )1 is a cycle, and if n ≥ 2
then ei has an exit, since any other edge is an exit. This satisfies condition (ii).
Furthermore, condition (i) is trivially satisfied as Rn only has one vertex, so that
the only nonempty subset of (Rn )0 is (Rn )0 itself. Thus LK (Rn ) ≅ L(1, n) is simple
for all n ≥ 2.
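As a reminder of why this recovers Leavitt's algebra (a sketch, assuming the standard presentation of L(1, n) and writing e1 , . . . , en for the loops of Rn at the single vertex v): the (CK1) and (CK2) relations in LK (Rn ) read
\[
e_i^* e_j \;=\; \delta_{ij}\, v \quad (1 \le i, j \le n), \qquad \sum_{i=1}^{n} e_i e_i^* \;=\; v,
\]
which, with v acting as the identity, are precisely the relations defining L(1, n).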
(iv) The infinite clock graph C∞ . In this case, for any radial vertex vi we have that
{vi } is a hereditary saturated subset of (C∞ )0 , and so LK (C∞ ) ≅ ⊕∞i=1 M2 (K) ⊕ KI22
is not simple.
This next corollary follows directly from Theorem 2.3.1 and Proposition 1.4.10,
and offers an alternative set of conditions that are equivalent to LK (E) being simple.
Corollary 2.3.3. Let E be an arbitrary graph. The Leavitt path algebra LK (E) is
simple if and only if E satisfies the following conditions:
V1 := {v ∈ E 0 : | CSP(v)| = 1}.
The following lemma was first given for row-finite graphs in [AA2, Lemma 7]
and then extended to arbitrary graphs in [AA3, Lemma 4.1].
Proof. Suppose that LK (E) is simple, and suppose there exists a v ∈ E 0 such that
CSP (v) = {p}. If p is not a cycle, it is easy to see that there exists a cycle based at v
whose edges are a subset of the edges of p, contradicting the fact that CSP (v) = {p}.
Thus p is a cycle and so, by condition (ii) of Theorem 2.3.1, there must exist an exit
e for p.
Let A be the set of all vertices in p. Now r(e) ∉ A, for otherwise we would
have another closed simple path based at v distinct from p. Let X = {r(e)} and let
have another closed simple path based at v distinct from p. Let X = {r(e)} and let
X̄ be the hereditary saturated closure of X. Recall the definition of Gn (X) from
Lemma 1.4.9. Since LK (E) is simple, by condition (i) of Theorem 2.3.1 we have
X̄ = E 0 , and so we can find an n ∈ N such that
The following useful result regarding infinite emitters in simple Leavitt path
algebras is from [AA3, Lemma 4.2].
Proof. Let z ∈ E 0 be an infinite emitter, and let e ∈ s−1 (z). Since LK (E) is simple,
by Corollary 2.3.3 (iii) we have that r(e) ≥ z. Thus there is a closed simple path p
B1 ⊃ B2 ⊃ B3 ⊃ · · ·
The following proposition is from [AA2, Proposition 9]. Though the result is
given there in a row-finite context, the proof still holds for arbitrary graphs.
element of LK (E), where ki ∈ K and pi , qi ∈ E ∗ . Then wαw = Σj kij pij q∗ij , where
s(pij ) = w = s(qij ). Thus pij , qij ∈ LK (H) and so wLK (E)w ⊆ wLK (H)w. Thus
wLK (H)w = wLK (E)w and so wLK (E)w is not purely infinite, as required.
We now come to the main proof of this section. This was first given for row-
finite graphs in [AA2, Theorem 11] and then extended to arbitrary graphs in [AA3,
Theorem 4.3]. It is here that we can apply Theorem 1.3.19, which we presented in
Section 1.3.
(Note that the proof of Theorem 1.3.19 is independent of any results in this section.)
Theorem 2.3.9. Let E be an arbitrary graph. Then LK (E) is purely infinite simple
if and only if E satisfies the following conditions:
(i) the only hereditary and saturated subsets of E 0 are ∅ and E 0 ;
(ii) every cycle in E has an exit; and
(iii) for every vertex v ∈ E 0 , there is a vertex u ∈ T (v) such that u is the base of a
cycle.
Proof. Suppose that conditions (i), (ii) and (iii) hold. Theorem 2.3.1 tells us im-
mediately that LK (E) is simple. Thus, to show that LK (E) is purely infinite, by
Theorem 1.3.19 it suffices to show that LK (E) is not a division ring and that for any
nonzero pair of elements x, y ∈ LK (E) there exist s, t ∈ LK (E) such that sxt = y.
Together, conditions (ii) and (iii) show there exists at least one cycle with an exit
in E, and thus there must exist two distinct edges e1 and e2 in E 1 . Since e∗1 e2 = 0,
LK (E) has zero divisors and therefore cannot be a division ring.
Now let x, y be a pair of nonzero elements in LK (E). Since E contains no cycles
without exits, by applying Proposition 2.2.11 we can find elements a, b ∈ LK (E)
such that axb = u, where u ∈ E 0 . By condition (iii), u connects to some vertex v
at the base of a cycle c. Thus either u = v or there is a path p ∈ E ∗ with s(p) = u
and r(p) = v. By choosing a0 = b0 = u in the former case, or a0 = p∗ , b0 = p in the
latter, we have elements a0 , b0 ∈ LK (E) such that a0 ub0 = v.
Since c is a closed simple path based at v and LK (E) is simple, Lemma 2.3.4
tells us there must be at least one other closed simple path q based at v with q ≠ c.
For each m ∈ N, let dm = cm−1 q. Since c cannot be a subpath of q, and vice versa,
we have c∗ q = 0 = q ∗ c. Using that c∗ c = v and assuming that m > n, we have
d∗m dn = (q ∗ (c∗ )m−1 )(cn−1 q) = q ∗ (c∗ )m−n q = 0. Similarly, d∗m dn = 0 for n > m. For
the case m = n, we have d∗m dn = q ∗ vq = v. Thus d∗m dn = δm,n v for all m, n ∈ N.
Since LK (E) is simple, we have ⟨v⟩ = LK (E), and so for an arbitrary w ∈ E 0
we can write w = Σti=1 ai vbi for some ai , bi ∈ LK (E). Let aw = Σti=1 ai d∗i and
bw = Σtj=1 dj bj . Using the fact that d∗i dj = δi,j v, we have

aw vbw = (Σti=1 ai d∗i ) v (Σtj=1 dj bj ) = Σti=1 ai vbi = w.
In other words, for any vertex w ∈ E 0 , we can find aw , bw ∈ LK (E) for which
aw vbw = w.
By Lemma 2.1.12, we can find a finite subset of vertices X = {v1 , . . . , vs } for
which e = Σsi=1 vi is a local unit for y, so that ey = y = ye. Let avi , bvi be elements
for which avi vbvi = vi , for each vi ∈ X. Let s′ = Σsi=1 avi d∗i and t′ = Σsj=1 dj bvj .
This gives

s′vt′ = (Σsi=1 avi d∗i ) v (Σsj=1 dj bvj ) = Σsi=1 avi vbvi = Σsi=1 vi = e.

Finally, setting s = s′a′a and t = bb′t′y gives sxt = s′a′(axb)b′t′y = s′a′ub′t′y = s′vt′y = ey = y,
and so, by Theorem 1.3.19, LK (E) is purely infinite.
Conversely, suppose that LK (E) is purely infinite simple. Again, conditions (i)
and (ii) follow directly from the fact that LK (E) is simple (by Theorem 2.3.1). If
condition (iii) does not hold, then there exists a vertex w ∈ E 0 such that no vertex
v ∈ T (w) is the base of a cycle. Since a cycle can be formed from a subset of edges
of any closed path, there cannot be any closed simple path based at any vertex v ∈
T (w) either. Thus, by Proposition 2.3.8, wLK (E)w is not purely infinite. Finally,
Proposition 1.3.18 gives that LK (E) is not purely infinite, a contradiction.
The following proposition from [AA3, Theorem 4.4] shows that, for any graph E
for which LK (E) is simple, we have the following dichotomy.
Proof. If E is acyclic then Theorem 4.2.3 tells us that LK (E) is locally matricial.
Otherwise, suppose E contains a cycle c. By Corollary 2.3.3 we have that E is
cofinal, and so every vertex connects to the infinite path c∞ . Thus every vertex
connects to a cycle, satisfying condition (iii) of Theorem 2.3.9. Since LK (E) is
simple, conditions (i) and (ii) of Theorem 2.3.9 are satisfied (by Theorem 2.3.1),
and thus LK (E) is purely infinite simple.
(i) The finite line graph Mn . Since Mn is acyclic for all n ∈ N, LK (Mn ) must
be locally matricial for all n ∈ N. This is no surprise, considering that LK (Mn ) ≅ Mn (K).
(ii) The rose with n leaves Rn . Since Rn contains n cycles for each n ∈ N,
LK (Rn ) ≅ L(1, n) must be purely infinite simple for all n ≥ 2.
2.4 Desingularisation
Recall that a vertex v ∈ E 0 is said to be singular if v is either a sink or an infinite
emitter. In this section we look at the process of ‘desingularisation’, in which we
construct from a given graph E a new graph that contains no singular vertices; in
other words, a graph that is row-finite and has no sinks. This concept was originally
used in the C ∗ -algebra context in [BPRS]. The significance of the desingularisa-
tion process is illustrated in Theorem 2.4.5, in which we show that the Leavitt
path algebra of a graph E is Morita equivalent to the Leavitt path algebra of its
desingularisation.
[Diagram: an infinite tail •v0 −f1→ •v1 −f2→ •v2 −f3→ •v3 −→ · · · added at the singular vertex v0 .]
We then remove the edges in s−1 (v0 ) and add an edge gj from vj−1 (in the infinite
line graph) to r(ej ) for each ej ∈ s−1 (v0 ). Effectively, we are removing each ej
and replacing it with the path f1 f2 . . . fj−1 gj of length j. Note that both ej and
f1 f2 . . . fj−1 gj have source v0 and range r(ej ).
Note also that the desingularisation of a graph may not necessarily be unique:
differences may arise depending on the way in which we choose to order the edges
in s−1 (v0 ) (in the case that v0 is an infinite emitter).
E∞ : •u −(∞)→ •v (the vertex u emits infinitely many edges, each with range v).
Note that u is an infinite emitter and v is a sink, so we add a tail at both vertices
in the desingularisation process. Furthermore, each edge emitted by u has range v,
and so we obtain the desingularisation
[Diagram: a tail •u → •u1 → •u2 → •u3 → · · · , a tail •v → •v1 → •v2 → •v3 → · · · , and an edge from u and from each ui to v.]
[Diagram: the infinite clock graph C∞ , in which the central vertex u emits one edge to each of the vertices v1 , v2 , v3 , v4 , . . . (indicated by (∞)).]
Again, each vertex in this graph is a singularity, resulting in an infinite number of
infinite tails. Thus the desingularisation of C∞ looks like
[Diagram: a tail •u → •u1 → •u2 → •u3 → · · · , with an edge from u to v1 and from each ui to vi+1 , and an infinite tail •vj → •vj1 → •vj2 → · · · added at each sink vj .]
arises when s(ei ) = s(ej ) = v0 , where v0 is an infinite emitter. In the case that
i = j, we have φ(e∗i )φ(ei ) = (g∗i f∗i−1 · · · f∗2 f∗1 )(f1 f2 · · · fi−1 gi ) = r(ei ) = φ(r(ei )). On
the other hand, if i ≠ j then φ(e∗i )φ(ej ) = (g∗i f∗i−1 · · · f∗2 f∗1 )(f1 f2 · · · fj−1 gj ) = 0, since
f1 f2 · · · fi−1 gi and f1 f2 · · · fj−1 gj are not subpaths of each other. Thus the (CK1)
relation is preserved. Finally, since regular vertices (and the edges they emit) are un-
changed by φ, and we only evaluate the (CK2) relation at regular vertices, it is clear
that the (CK2) relation is preserved. Thus φ is a well-defined K-homomorphism, as
required.
Finally, we show that φ is a monomorphism. Suppose that x ∈ ker(φ) and x ≠ 0.
By Proposition 2.2.11 there exist y, z ∈ LK (E) for which either yxz = v ∈ E 0 or
yxz = Σni=−m ki ci ≠ 0, where m, n ∈ N0 , ki ∈ K and c is a cycle without exits in E.
In the first case, φ(yxz) = φ(v) = v ≠ 0 (since φ fixes every vertex of E), contradicting x ∈ ker(φ).
In the second case, note that φ maps paths of length t to paths of length greater than or equal to t, and that c and φ(c)
must have the same source and range. Furthermore, φ(c) cannot pass through any
vertex more than once (from the definition of φ) and so φ(c) is a cycle in F . Since
LK (F ) is graded, this implies that each term ki φ(c)i = 0, and thus each ki = 0, which
is impossible since Σni=−m ki ci ≠ 0. Thus ker(φ) = {0} and so φ is a monomorphism,
as required.
Proposition 2.4.4 leads to the following powerful result from [AA3, Theorem 5.2].
Here we have greatly expanded the proof to clarify the arguments and results used
at each step.
Theorem 2.4.5. Let E be an arbitrary graph and let F be a desingularisation of E.
Then LK (E) is Morita equivalent to LK (F ).

Proof. Write E 0 = {v1 , v2 , . . .} and, for each k ∈ N, let tk = v1 + · · · + vk . Then for any finite subset X of
LK (E) there exists a tk such that tk x = x = xtk for all x ∈ X (see the proof of
Lemma 2.1.12), and so {tk : k ∈ N} forms a set of local units for LK (E). (Note
that if E 0 is finite then tk is simply the identity for LK (E) for all k ≥ |E 0 |, by
Lemma 2.1.12.)
r(pi ) = r(qi ) and s(pi ), s(qi ) ∈ {vl : l ≤ k} for each i ∈ {1, . . . , n}. Suppose that p is
a path in F with s(p) ∈ {vl : l ≤ k}. If p = p1 . . . pn , where each pi is an edge from
the original graph E, then p = φ(p1 . . . pn ). If p = f1 f2 . . . fj−1 gj (where the fi and
gj are as defined in Definition 2.4.1), then p = φ(ej ), where ej ∈ s−1 (v0 ) for some
infinite emitter v0 ∈ E 0 . Furthermore, if p is a concatenation of two such paths,
then clearly p ∈ Im(φ). For all three cases above, clearly we also have p∗ ∈ Im(φ).
If v0 is a sink, then each vi along the infinite tail based at v0 emits precisely one
edge, namely fi+1 . Thus applying the (CK2) relation at vi gives vi = fi+1 f∗i+1 , and
so

(f1 · · · fj−1 fj )(f∗j f∗j−1 · · · f∗1 ) = f1 · · · fj−1 vj−1 f∗j−1 · · · f∗1
= f1 · · · fj−2 vj−2 f∗j−2 · · · f∗1
...
= f1 f∗1
= v0 ,
and we are done. (Note that the inverse images of all of these paths also have
source in the set {vl : l ≤ k}, and thus are indeed contained in tLK (E)t.) Therefore
φ|tLK (E)t is an isomorphism of K-algebras and so tLK (E)t ≅ tLK (F )t.
From the definition of tk , we can view tk LK (E)tk as the set of all elements in
LK (E) generated by paths p with s(p) ∈ {vl : l ≤ k}. Thus we have tk LK (E)tk ⊆
tk+1 LK (E)tk+1 (and tk LK (F )tk ⊆ tk+1 LK (F )tk+1 ) for each k ∈ N. For every pair
i, j ∈ N with i ≤ j, let ϕij be the inclusion map from ti LK (E)ti to tj LK (E)tj and
let ϕ̄ij be the inclusion map from ti LK (F )ti to tj LK (F )tj . For such a pair i, j it
is easy to see that tj ti = ti = ti tj , and so for any x = ti xti ∈ ti LK (E)ti we have
tj xtj = tj (ti xti )tj = ti xti = x. Thus we can view the inclusion map ϕij as mapping
ti xti 7→ tj xtj (and similarly for ϕ̄ij ). For ease of notation, let φ|tk LK (E)tk = φk for all
k ∈ N. Thus for any x = ti xti ∈ ti LK (E)ti we have
ϕ̄ij φi (ti xti ) = ϕ̄ij (ti φi (x)ti ) = tj φi (x)tj = φj (tj xtj ) = φj ϕij (ti xti ),
that is, the square with horizontal maps φi : ti LK (E)ti → ti LK (F )ti and φj : tj LK (E)tj → tj LK (F )tj , and vertical inclusion maps ϕij and ϕ̄ij , commutes.
Clearly (ti LK (E)ti , ϕij )N and (ti LK (F )ti , ϕ̄ij )N are direct systems of rings. Since
these are both ascending chains of rings, the direct limits lim−→i∈N ti LK (E)ti and
lim−→i∈N ti LK (F )ti exist (see Appendix A). For ease of notation, we set

RE = lim−→i∈N ti LK (E)ti and RF = lim−→i∈N ti LK (F )ti .

For each i ∈ N, let ϕi be the map from ti LK (E)ti to RE and let ϕ̄i be the map from
ti LK (F )ti to RF as defined in Definition A.1.1.
(ϕ̄j φj )ϕij = ϕ̄j (φj ϕij ) = ϕ̄j (ϕ̄ij φi ) = (ϕ̄j ϕ̄ij )φi = ϕ̄i φi .
Thus, by condition (ii) of Definition A.1.1, there exists a unique ring homomorphism
µ : RE → RF for which ϕ̄i φi = µϕi for all i ∈ N. By a similar argument, there exists
a unique ring homomorphism µ′ : RF → RE for which ϕi φ−1i = µ′ ϕ̄i for all i ∈ N.
From the second equation we have ϕi = µ′ ϕ̄i φi = µ′ µϕi (substituting from the first
equation) for all i ∈ N, and so, by appealing to the uniqueness given in Defini-
tion A.1.1 (ii), we have µ′ µ = 1RE . Similarly, the first equation gives ϕ̄i = µϕi φ−1i = µµ′ ϕ̄i
for all i ∈ N, and so µµ′ = 1RF . Thus µ is a ring isomorphism and RE ≅ RF .
since Awi is a direct summand of LK (F )w0 for each singular vertex w0 ∈ E 0 . From
the above equation, we have an isomorphism between a subset of H ⊕H and LK (F ),
which implies we have an epimorphism from H ⊕ H to LK (F ). Since LK (F ) is a
generator for LK (F )-Mod, for any M ∈ LK (F )-Mod there exists an index set I
and epimorphism τ : LK (F )(I) → M . This induces an epimorphism η : H (2I) =
H (I) ⊕ H (I) → M , and so H is a generator for LK (F )-Mod.
Now, note that we have LK (F )tk = LK (F )(v1 + · · · + vk ) = ⊕{vi :i≤k} LK (F )vi
for each k ∈ N. Thus it is easy to see that lim−→k∈N LK (F )tk = ⊕v∈E 0 LK (F )v = H.
Note that each LK (F )tk is projective (by Proposition 1.2.13), is finitely generated
(with generating set {tk }) and is a direct summand of H. Thus H is a locally projec-
tive generator for LK (F )-Mod (see Definition 1.3.14) and so by Proposition 1.3.15
any ring that is isomorphic to lim−→k∈N End(LK (F )tk ) must be Morita equivalent to
LK (F ). Finally,

lim−→k∈N End(LK (F )tk ) ≅ lim−→k∈N tk LK (F )tk ≅ LK (E),

and so LK (E) is Morita equivalent to LK (F ), as required.
Chapter 3

Socle Theory of Leavitt Path Algebras

In this chapter we define the notion of a socle and give a precise description of the
socle of an arbitrary Leavitt path algebra in Section 3.2. Furthermore, we expand
this definition to a socle series in Section 3.4, and again describe the socle series
of a Leavitt path algebra, applying the concept of a quotient graph introduced in
Section 3.3. To begin, we introduce some preliminary ring-theoretic definitions and
results.
It is clear from the definition that socl (R) is a left ideal of R. However, what is
slightly less obvious is that it is also a right ideal of R, as the following proposition
shows.
Proof. Since socl (R) is clearly a left ideal of R (since it is the sum of left ideals), it
suffices to show that socl (R) is also a right ideal of R. Take an arbitrary nonzero
element s ∈ socl (R) and an arbitrary nonzero r ∈ R. Since s ∈ socl (R), we can
write s = l1 + . . . + ln , where each li ∈ Li and Li is a minimal left ideal of R. Thus
sr = l1 r + . . . + ln r, and so it suffices to show that li r ∈ socl (R) for each i.
Take an arbitrary minimal left ideal Li of R and define φ : Li → R by φ(x) = xr,
for all x ∈ Li . It is easy to see that φ is an R-module homomorphism: clearly φ is
additive, and for any r0 ∈ R and x ∈ Li we have φ(r0 x) = (r0 x)r = r0 (xr) = r0 φ(x).
Since ker(φ) is a left ideal contained in Li and Li is minimal, then either ker(φ) =
Li or ker(φ) = {0}. In the former case, this gives φ(Li ) = {0}. In the latter case,
φ is a monomorphism, and so φ : Li → φ(Li ) is an isomorphism of left R-modules.
Specifically, φ(Li ) is a minimal left ideal of R. In either case, φ(Li ) ⊆ socl (R), and
thus xr ∈ socl (R) for every x ∈ Li . In particular, li r ∈ socl (R) and we are done.
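As a small illustration of the definition (an example not taken from the text): the ring of integers has zero socle, since no nonzero ideal of \(\mathbb{Z}\) is minimal:
\[
0 \;\subsetneq\; 2n\mathbb{Z} \;\subsetneq\; n\mathbb{Z} \qquad \text{for every } n \neq 0,
\]
so \(\operatorname{soc}_l(\mathbb{Z}) = 0\).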
Then any element of J 3 must be a sum of elements of the form xay, where x, y ∈ R,
and so J 3 ⊆ RaR and thus J 3 = 0. Since R is semiprime, we have that J = 0 and
so a = 0, since a ∈ J. Thus R is nondegenerate.
Proof. Clearly if R contains no nonzero left (or right) nilpotent ideals then it contains
no nonzero two-sided nilpotent ideals and must therefore be semiprime. To prove
the converse, suppose that R is semiprime and let I be a nonzero left ideal of R
such that I n = 0 for some n ∈ N. As in the proof of Proposition 3.1.3, we can find
a left ideal J such that J is nonzero and J 2 = 0. Take an arbitrary nonzero element
x ∈ J and let L be the two-sided ideal generated by x, so that

L = { Σi ri xsi + Σj r′j x + Σk xs′k + mx : ri , si , r′j , s′k ∈ R, m ∈ Z }.

Since x ∈ J and Rx ⊆ J with J 2 = 0, we have

L6 ⊆ (RxR)(RxR) ⊆ RxRxR ⊆ J 2 R = 0.

Thus L is a nilpotent two-sided ideal of R and so, since R is semiprime, L = 0. But
then x = 0, a contradiction, and so R contains no nonzero nilpotent left (or right) ideals.
We now move on to describing the general form of minimal left ideals. The
following proposition is from [J2, Proposition 3.9.1].
Corollary 3.1.6. Every minimal left ideal of a semiprime ring R is of the form Re,
where e is an idempotent in R.
We can show similarly that every minimal right ideal of a semiprime ring R is
of the form eR, where e is an idempotent. Note that the converse is not necessarily
true: for a given idempotent e in a semiprime ring R, Re and eR may not be minimal
left or right ideals. However, the following proposition from [L1, Lemma 1.19] shows
that if one of these is minimal then both are.
Proposition 3.1.8. Let R be a ring. If R is semiprime, then socl (R) = socr (R).
Proof. Since R is semiprime, Corollary 3.1.6 tells us that the left socle of R
is the sum of minimal left ideals of the form Re, where e is an idempotent in
R. Furthermore, by Proposition 3.1.7 we know that Re is a minimal left ideal if
and only if eR is a minimal right ideal. Thus, if socl (R) = Σi Rei , then Σi ei R ⊆
socr (R). Therefore each ei ∈ ei R ⊆ socr (R) and so, since socr (R) is a two-sided
ideal, socl (R) = Σi Rei ⊆ socr (R). Using a similar argument, we also have that
socr (R) ⊆ socl (R), and so socr (R) = socl (R).
In this section we give a precise description of the socle of an arbitrary Leavitt path algebra LK (E) in terms of
the line points of E. We begin with the following proposition, shown in [AMMS2,
Proposition 3.4].
Proposition 3.2.1. For an arbitrary graph E, the Leavitt path algebra LK (E) is
semiprime.
Proof. Suppose that LK (E) is not semiprime, so that there exists a nonzero ideal
I such that I 2 = 0. Take a nonzero x ∈ I. By Proposition 2.2.11, there exist
y, z ∈ LK (E) such that either yxz = kv for some nonzero k ∈ K and some v ∈ E 0 ,
or yxz = Σni=−m ki ci for some ki ∈ K (not all zero) and some c ∈ E ∗ , where c is a
cycle without exits. In either case yxz is a nonzero element of I whose square is again
nonzero (in the second case because cj ck = cj+k ≠ 0 for all j, k, so the lowest-degree
term of (yxz)2 is nonzero), contradicting I 2 = 0.
Proposition 3.1.8 and Proposition 3.2.1 lead immediately to the following corol-
lary.
Corollary 3.2.2. Let E be an arbitrary graph. Then socl (LK (E)) = socr (LK (E)).
In light of this result, we will drop the terms ‘left’ and ‘right’ and simply refer
to the ‘socle’ of a Leavitt path algebra LK (E), which we denote by soc(LK (E)).
Recall that a vertex is a bifurcation if it emits two or more edges, and that a
vertex v is a line point if there are no bifurcations or cycles based at any vertex
w ∈ T (v). We say that a path p contains a bifurcation if the set p0 \{r(p)}
contains a bifurcation. The following related lemma is from [AMMS1, Lemma 2.2],
and though it is given there in a row-finite context, the proof remains valid for the
arbitrary case.
Proof. Let p be the unique path for which s(p) = u and r(p) = v. By Lemma 2.1.10
we have that p∗ p = v. Furthermore, since p contains no bifurcations, for each edge
ei in p we have s(ei ) = ei e∗i (by the (CK2) relation). Using the same logic as in the
proof of Proposition 2.2.11, page 59, this gives pp∗ = u.
Define a map φp : LK (E)u → LK (E)v by φp (x) = xp. Similarly, define a map
φp∗ : LK (E)v → LK (E)u by φp∗ (y) = yp∗ . These maps are easily seen to be left
LK (E)-module homomorphisms. Furthermore, we have φp∗ φp (x) = xpp∗ = xu = x
and φp φp∗ (y) = yp∗ p = yv = y. Thus φp and φp∗ are mutual inverses, and so
LK (E)u ≅ LK (E)v as left LK (E)-modules, as required.
We now embark on a series of results concerning left ideals and minimal left ideals
of a Leavitt path algebra LK (E), building towards our main result in Theorem 3.2.11.
The following proposition is from [AMMS1, Proposition 2.3], and though it is given
in a row-finite context, it is easily adapted to the arbitrary case by requiring that u
is a regular vertex rather than simply ‘not a sink’.
Proof. By the (CK2) relation, we know that u = Σni=1 fi f∗i , and so LK (E)u =
Σni=1 LK (E)fi f∗i . To show that this sum is direct, note that the fi f∗i are orthogonal
idempotents by the (CK1) relation: (fi f∗i )(fi f∗i ) = fi (f∗i fi )f∗i = fi r(fi )f∗i = fi f∗i ,
while (fi f∗i )(fj f∗j ) = fi (f∗i fj )f∗j = 0 for i ≠ j. Thus, if xi fi f∗i = Σnj=1,j≠i xj fj f∗j for
Now consider an arbitrary element y = Σni=1 yi ∈ ⊕ni=1 LK (E)vi . Then Σni=1 yi f∗i ∈
LK (E)u and

φ(Σni=1 yi f∗i ) = Σni=1 φ(yi f∗i ) = Σni=1 Σnj=1 yi f∗i fj = Σni=1 (yi f∗i fi ) = y,
The following proposition from [AMMS2, Lemma 4.3] considers the case in which
u is an infinite emitter.
LK (E)u.
mutually orthogonal idempotents (by the (CK1) relation). Now, since r(f∗i ) = u
for each i, we have the inclusion ⊕i∈I LK (E)fi f∗i ⊆ LK (E)u. Suppose the converse
containment also holds, so that we can write u = Σj xj fj f∗j , where {fj } is a finite
subset of s−1 (u) and each xj ∈ LK (E). Since
u is an infinite emitter, there exists a g ∈ s−1 (u) such that g ≠ fj for each j.
Thus g = ug = Σj xj fj f∗j g = 0 by the (CK1) relation, a contradiction. Thus
⊕i∈I LK (E)fi f∗i is properly contained in LK (E)u, as required.
From Corollary 3.2.6 we can begin to see a relationship forming between minimal
left ideals and line points. The following proposition from [AMMS1, Corollary
2.4] reinforces this notion. Though their proof is given in a row-finite setting, it
holds for arbitrary graphs as well.
Proof. Let µ be a closed path based at u and suppose that LK (E)u is a minimal
left ideal. By Corollary 3.2.6, there cannot be a bifurcation at any vertex in T (u).
In particular, µ cannot contain any bifurcations and so must be a cycle without
exits. Consider the left ideal LK (E)(u + µ). This ideal is nonempty, since u + µ =
u(u + µ) ∈ LK (E)(u + µ). Furthermore, it is contained in LK (E)u since r(µ) = u.
Thus, by the minimality of LK (E)u, we have LK (E)(u + µ) = LK (E)u. Specifically,
we have u ∈ LK (E)(u + µ).
Thus we can write u = Σni=1 ki αi (u + µ), where the αi are monomials in LK (E)
and ki ∈ K. Using a similar argument to the one found in the proof of Proposi-
tion 2.2.11, each αi must begin and end in u and is therefore either a power of µ or
µ∗ (since µ is a cycle without exits). Thus we can write u = P (µ, µ∗ )(u + µ), where
P is a polynomial with coefficients in K; that is,

P (µ, µ∗ ) = l−m (µ∗ )m + · · · + l−1 µ∗ + l0 u + l1 µ + · · · + ln µn ,

where each li ∈ K and m, n ∈ N. Using the same argument found in the proof of
Theorem 2.3.1, we can deduce that l−i = 0 = li for all i > 0. Thus u = l0 u(u + µ) =
l0 (u + µ), which is impossible, and so LK (E)u cannot be minimal.
The following proposition was first given in [AMMS1, Theorem 2.9] and then
generalised to the arbitrary case in [AMMS2, Theorem 4.12]. However, a far simpler
proof is given in [ARM1, Proposition 1.9], and it is this proof that we present below.
Proof. Suppose that v is a line point in E. We begin by showing that every nonzero
LK (E)-endomorphism of LK (E)v is an automorphism. By Lemma 1.2.2 we have
that End(LK (E)v) ≅ (vLK (E)v)Op . Take an arbitrary element x ∈ (vLK (E)v)Op .
Then x = v(Σni=1 ki pi q∗i )v = Σni=1 ki (vpi q∗i v), where each pi , qi ∈ E ∗ and n ∈ N.
If vpi q∗i v ≠ 0 for some i ∈ {1, . . . , n}, then s(pi ) = s(qi ) = v and r(pi ) = r(qi ).
Thus pi and qi are both paths from v to r(pi ). Since v is a line point there can only
be one such path and so pi = qi . Furthermore, since pi contains no bifurcations
we have vpi q∗i v = v (see the proof of Lemma 3.2.3). Thus x = (Σi ki )v and so
End(LK (E)v) ≅ (vLK (E)v)Op = Kv. Since Kv is a field with identity element v,
every nonzero element of End(LK (E)v) is invertible and thus is an automorphism.
Now let a be an arbitrary nonzero element in LK (E)v. Since LK (E) has local
units, LK (E)a ≠ 0. Furthermore, since LK (E) is semiprime we have (LK (E)a)2 ≠ 0,
and so there exist b, c ∈ LK (E) such that (ca)(ba) ≠ 0. Define φ : LK (E)v →
LK (E)v by φ(x) = x(ba). Then φ(a) = aba ≠ 0 and so φ is a nonzero endomorphism,
and therefore an automorphism. Thus, since v ∈ LK (E)v, we must have v = d(ba)
for some d ∈ LK (E). Therefore v ∈ LK (E)a and so, by Lemma 1.1.6, LK (E)v is
minimal.
Proposition 3.2.8 leads to the following lemma from [AMMS1, Proposition 4.1].
Proof. By Proposition 3.2.8, we know that LK (E)u is a minimal left ideal for any
vertex u ∈ Pl (E) and is therefore contained in the socle. To show that the converse
containment is not true, we give the following counterexample. Let E be the graph
[Diagram: a graph with three vertices v, w and z, in which v and w are joined by the edges e and f , together with the vertex z.]
So far we have shown that any principal left ideal of LK (E) generated by a line
point u is contained in the socle of LK (E), but we have not quite given a precise
formulation of the socle. The following theorem, from [AMMS1, Theorem 3.4],
brings us one step closer to doing so. Though the original proof is given for the
row-finite case, it is easily generalised to the arbitrary case by applying the relevant
generalised results.
Proof. Consider x ∈ LK (E). By Proposition 2.2.11 we have two cases; we show that
the second case is not possible.
Suppose that there exist elements y, z ∈ LK (E) such that yxz is a nonzero
element in

wLK (E)w = { Σni=−m ki ci : m, n ∈ N and ki ∈ K },

where c is a cycle without exits based at w.
We now show that (wLK (E)w)λ is a minimal left ideal in the subring wLK (E)w.
By Lemma 1.1.6 it suffices to show that, for any nonzero a ∈ (wLK (E)w)λ, we
have λ ∈ (wLK (E)w)a. Since a ∈ LK (E)λ and LK (E)λ is minimal in LK (E), we
have LK (E)a = LK (E)λ, and so λ ∈ LK (E)a. Therefore λ = wλ ∈ wLK (E)a =
(wLK (E)w)a, as required.
It is straightforward to see that the function φ : wLK (E)w → K[t, t−1 ] given
by φ(w) = 1, φ(c) = t and φ(c∗ ) = t−1 (and expanded linearly) is an isomorphism.
This implies that φ((wLK (E)w)λ) is minimal in K[t, t−1 ]. However, K[t, t−1 ] has no
minimal left ideals. To see this, suppose that f (t) = Σli=k ai ti and g(t) = Σnj=m bj tj
are two nonzero elements of R = K[t, t−1 ]. Without loss of generality, we can
suppose that ak ≠ 0 and bm ≠ 0, so that f (t)g(t) = ak bm tk+m + higher powers ≠ 0.
Thus R is an integral domain. Now suppose that R contains a minimal left ideal I
and let x be a nonzero element of I. Since x2 ∈ I and I is minimal, I = Rx2 , and so
x = yx2 for some y ∈ R. Since R is an integral domain, this gives 1 = yx ∈ I and
so I = R. Thus R is a field. However, this is a contradiction, since it is easy to see
that not all elements in R have an inverse (for example, 1 + t). Thus K[t, t−1 ] has
no minimal left ideals, and so the second case of Proposition 2.2.11 is not possible,
as claimed.
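To expand the parenthetical claim that 1 + t is not a unit (a short degree argument, not spelled out in the text): if f (t) = Σni=m ai ti is nonzero with am ≠ 0 and an ≠ 0, then
\[
(1+t)\,f(t) \;=\; a_m t^{m} \;+\; \cdots \;+\; a_n t^{\,n+1},
\]
which has nonzero coefficients in the two distinct degrees m and n + 1, and so can never equal 1.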
Therefore we must be in the first case of Proposition 2.2.11, and so there exist
Now we come to the main result of this section, where we describe precisely the
structure of the socle of a Leavitt path algebra.
Theorem 3.2.11. Let E be an arbitrary graph. Then soc(LK (E)) = I(Pl (E)) =
I(H), where H is the hereditary saturated closure of Pl (E).
Proof. First, we show that soc(LK (E)) ⊆ I(Pl (E)). Let I be a minimal left ideal
of LK (E). Since LK (E) is semiprime, by Corollary 3.1.6 there exists an idempotent
α ∈ LK (E) such that I = LK (E)α. Furthermore, by Theorem 3.2.10 we have
LK (E)α ≅ LK (E)u for some u ∈ Pl (E). Thus there exists a left LK (E)-module
isomorphism φ : LK (E)α → LK (E)u and we can find elements x, y ∈ LK (E) such
that φ(α) = xu and φ−1 (u) = yα, giving

α = φ−1 (φ(α)) = φ−1 ((xu)u) = (xu)φ−1 (u) = x(u)yα.

Thus α = x(u)yα ∈ I(Pl (E)), and so I = LK (E)α ⊆ I(Pl (E)) and therefore
soc(LK (E)) ⊆ I(Pl (E)).
Corollary 3.2.12. For an arbitrary graph E, the Leavitt path algebra LK (E) has
nonzero socle if and only if Pl (E) ≠ ∅.
Example 3.2.13. We now use Theorem 3.2.11 to compute the socle of some familiar
Leavitt path algebras.
(i) The finite line graph Mn . Every vertex in Mn is a line point, and so by
Theorem 3.2.11 we have soc(LK (Mn )) = I(Pl (Mn )) = I((Mn )0 ) = LK (Mn ). Thus,
since LK (Mn ) ≅ Mn (K), we also have that soc(Mn (K)) = Mn (K) for all n ∈ N.
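This can also be seen directly, without any graph machinery (a sketch using the standard column decomposition of a matrix ring):
\[
M_n(K) \;=\; \bigoplus_{j=1}^{n} M_n(K)\,E_{jj},
\]
where Ejj denotes the matrix unit with 1 in the (j, j) entry. Each left ideal Mn (K)Ejj consists of the matrices supported on the j-th column, is isomorphic to the simple left Mn (K)-module K n , and is therefore minimal; hence the socle is all of Mn (K).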
(ii) The rose with n leaves Rn . The graph Rn contains a single vertex v that is
the base of n cycles; in particular, v is not a line point. Thus Pl (Rn ) = ∅ and so
soc(LK (Rn )) = 0. Thus, since LK (Rn ) ≅ L(1, n), we also have that soc(L(1, n)) = 0 for
all n ∈ N.
(iii) The infinite clock graph C∞ . In this case, the line points of C∞ are the radial
vertices vi , so that Pl (C∞ ) = {vi }∞i=1 . Thus we have soc(LK (C∞ )) = I({vi }∞i=1 ).
Recall from Example 2.1.7 the isomorphism φ : LK (C∞ ) → ⊕∞i=1 M2 (K) ⊕ KI22
that maps each vertex vi to (E11 )i , the element of ⊕∞i=1 M2 (K) with E11 in the ith
component and zeros elsewhere. Thus soc(⊕∞i=1 M2 (K) ⊕ KI22 ) is the two-sided
ideal generated by the elements (E11 )i . This ideal contains every matrix unit
(Emn )j , since (Emn )j = (Em1 )j (E11 )j (E1n )j , and since such matrix units generate
⊕∞i=1 M2 (K) we have

soc(⊕∞i=1 M2 (K) ⊕ KI22 ) = ⊕∞i=1 M2 (K).
Many of the results in this section are thanks to Tomforde, whose paper [To]
gives many valuable results regarding the ideal structure of a Leavitt path algebra.
We begin with the following definitions.
BH = {v ∈ E 0 \H : v is an infinite emitter and 0 < |s−1 (v) ∩ r−1 (E 0 \H)| < ∞}.
In other words, a breaking vertex is an infinite emitter that emits an infinite number
of edges into H, while emitting only a finite number of edges into the rest of the
graph. Note that if E is row-finite then BH is always empty.
Definition 3.3.2. Let E be an arbitrary graph and let (H, S) be an admissible pair
of E. The quotient graph E\(H, S) is defined as follows. Let B′H be a set of
duplicates of BH , and write B′H = {v′ : v ∈ BH }. Let S′ = {v′ ∈ B′H : v ∈ S}. We
define

(E\(H, S))0 = (E 0 \H) ∪ (B′H \S′ ) and
(E\(H, S))1 = {e ∈ E 1 : r(e) ∉ H} ∪ {e′ : e ∈ E 1 and r(e) ∈ BH \S}.

Furthermore, the source and range functions sE\(H,S) and rE\(H,S) coincide with sE
and rE when applied to {e ∈ E 1 : r(e) ∉ H}, while we define sE\(H,S) (e′ ) = sE (e)
and rE\(H,S) (e′ ) = (rE (e))′ . If S = ∅, we often write E\(H, S) as simply E|H.
Thus, to form the quotient graph E\(H, S) we first remove all vertices u ∈ H
and all edges e ∈ E 1 with r(e) ∈ H. Then, for each breaking vertex v ∈ BH \S, we
add a new vertex v′ to the graph. Furthermore, for each edge e with r(e) = v, we
add a new edge e′ to the graph, running from s(e) to v′ . Note that this construction
implies that every v′ ∈ B′H \S′ is a sink.
[Diagram: an example of a graph E with infinite emitters (indicated by (∞)) and edges e1 , e2 , together with vertices u1 , u2 , u3 and its quotient graph, in which the new sinks v′1 , v′2 and new edges e′1 , e′2 have been added.]
For each breaking vertex v ∈ BH we define vH := v − Σ{e∈E1 : s(e)=v, r(e)∉H} ee∗ .
Note that, by the definition of a breaking vertex, this sum must be finite and is
therefore well-defined. Using the fact that ei e∗i ej e∗j = δij ei e∗i (by the (CK1) relation),
it is easy to see that v H is an idempotent.
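Spelling this computation out (a short verification; here M denotes the finite set {e ∈ E 1 : s(e) = v, r(e) ∉ H} appearing in the sum defining vH ):
\[
(v^H)^2 \;=\; \Big(v - \sum_{e \in M} ee^*\Big)\Big(v - \sum_{f \in M} ff^*\Big)
\;=\; v - 2\sum_{e \in M} ee^* + \sum_{e, f \in M} e\,(e^* f)\,f^*
\;=\; v - \sum_{e \in M} ee^* \;=\; v^H,
\]
using e∗ f = δe,f r(e) from the (CK1) relation and v ee∗ = ee∗ = ee∗ v (since s(e) = v).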
Definition 3.3.5. Let E be an arbitrary graph. For any admissible pair (H, S) of E,
we denote by I(H,S) the two-sided ideal in LK (E) generated by the sets {u : u ∈ H}
and {v H : v ∈ S}. Note that if S is empty then I(H,S) = I(H).
The following proposition is from [To, Lemma 5.6] and describes the structure
of an ideal of the form I(H,S) . Here we have greatly expanded the proof for clarity.
Proposition 3.3.6. Let E be an arbitrary graph. For any admissible pair (H, S) of
E we have

I(H,S) = spanK {αβ ∗ : α, β ∈ E ∗ and r(α) = r(β) ∈ H} + spanK {αwH β ∗ : α, β ∈ E ∗ , w ∈ S and r(α) = r(β) = w}.
Proof. Let J denote the right-hand side of the above equation. It is clear that every
element in J is in the ideal generated by {v : v ∈ H} ∪ {wH : w ∈ S}, that is,
J ⊆ I(H,S) . To show the converse containment, let x ∈ I(H,S) , so that
x = Σi ai vi bi + Σj cj wHj dj ,

where each ai , bi , cj , dj ∈ LK (E), each vi ∈ H, each wj ∈ S and the sums are finite.
By Lemma 2.1.8 we know that every element in LK (E) is of the form Σi ki pi q∗i ,
where each pi , qi ∈ E ∗ and each ki ∈ K. Thus, omitting the scalars ki for ease of
notation, we can write the above expression as

x = Σi (p1i q∗1i )vi (p2i q∗2i ) + Σj (p1j q∗1j )wHj (p2j q∗2j ),
where each p, q ∈ E ∗ .
Take a nonzero term y = (p1 q1∗ )v(p2 q2∗ ) from the first sum. Since y is nonzero,
we must have s(q1 ) = s(p2 ) = v, and so y = p1 q∗1 p2 q∗2 . Since q∗1 p2 ≠ 0, Lemma 2.1.10
tells us that either p2 = q1 γ or q1 = p2 τ for some paths γ, τ in E. For the former
case, we have y = p1 q1∗ (q1 γ)q2∗ = p1 γq2∗ . Since s(p2 ) = v and r(p2 ) = r(γ), we have
r(γ) ∈ T (v). Thus r(γ) ∈ H, by the hereditary nature of H. So, taking α = p1 γ
and β = q2 , we have y = αr(γ)β ∗ ∈ J. For the latter case, we have y = p1 τ ∗ q2∗ , and
a similar argument shows that again y ∈ J.
Now take a nonzero term z = (p1 q∗1 )wH (p2 q∗2 ) from the second sum above.
Letting M = {e ∈ E 1 : s(e) = w, r(e) ∉ H}, we can write z = (p1 q∗1 )(w −
Σe∈M ee∗ )(p2 q∗2 ) (from the definition of wH ). Again, since z is nonzero we must
z = αwH β ∗ ∈ J.
Case 2: l(q1 ) = 0, l(p2 ) > 0. Let f be the initial edge of p2 , so that p2 = f p′2 .
Thus z = p1 f p′2 q∗2 − Σe∈M p1 ee∗ f p′2 q∗2 . If r(f ) ∉ H then f ∈ M (since s(f ) = w),
and so using the fact that e∗ f = 0 for all e ∈ M such that e ≠ f , we have
To see that I(H,S) is graded, note that each term αβ ∗ , where r(α) = r(β) ∈ H, is
homogeneous of degree |α| − |β|. Furthermore, for any v ∈ S, v H is by definition an
Thus Proposition 3.3.6 allows us to describe precisely the elements of an ideal I(H,S)
(or I(H)) in a relatively simple way. This will prove valuable in future results.
Define a map φ : LK (E) → LK (E\(H, S)) on the generators of LK (E) by

φ(v) = v if v ∈ (E 0 \H)\(BH \S), φ(v) = v + v′ if v ∈ BH \S, and φ(v) = 0 if v ∈ H;
φ(e) = e if r(e) ∈ (E 0 \H)\(BH \S), φ(e) = e + e′ if r(e) ∈ BH \S, and φ(e) = 0 if r(e) ∈ H;
φ(e∗ ) = e∗ if r(e) ∈ (E 0 \H)\(BH \S), φ(e∗ ) = e∗ + (e′ )∗ if r(e) ∈ BH \S, and φ(e∗ ) = 0 if r(e) ∈ H.
Extend φ linearly and multiplicatively. To begin, we must check that φ preserves the
Leavitt path algebra relations on LK (E), a rather technical and tedious procedure.
However, for the sake of completeness we will show this process in full for this
particular proof. For ease of notation, we set (E 0 \H)\(BH \S) = T .
First, we check that the (A1) relation holds, i.e. that φ(vi )φ(vj ) = δij φ(vi ) for
all vi , vj ∈ E 0 . We must examine several different cases:
Case 1: vi , vj ∈ T . Then φ(vi )φ(vj ) = vi vj = δij vi = δij φ(vi ).
Case 2: vi ∈ T, vj ∈ BH \S. Then φ(vi )φ(vj ) = vi (vj + v′j ) = δij vi = δij φ(vi ) (we
know that vi ≠ v′j since v′j ∉ LK (E)). A similar argument shows the relation holds
for vi ∈ BH \S, vj ∈ T .
Case 3: vi , vj ∈ BH \S. Then φ(vi )φ(vj ) = (vi + v′i )(vj + v′j ) = vi vj + v′i v′j =
δij (vi + v′i ) = δij φ(vi ).
Case 4: Either vi or vj ∈ H. Then φ(vi )φ(vj ) = 0 = δij φ(vi ).
Next, we check that the (A2) relations hold. First, we check that φ(s(e))φ(e) =
φ(e) for all e ∈ E 1 .
Case 1: s(e), r(e) ∈ T . Then φ(s(e))φ(e) = s(e)e = e = φ(e).
Case 2: s(e) ∈ T, r(e) ∈ BH \S. Then φ(s(e))φ(e) = s(e)(e + e′ ) = e + e′ = φ(e),
since s(e′ ) = s(e).
Case 3: s(e) ∈ BH \S, r(e) ∈ T . Then φ(s(e))φ(e) = (s(e) + s(e)′ )e = s(e)e =
e = φ(e) (we know that s(e)′ e = 0 since every v′ ∈ B′H is a sink).
Case 4: s(e), r(e) ∈ BH \S. Then φ(s(e))φ(e) = (s(e) + s(e)′ )(e + e′ ) = s(e)e +
s(e)e′ = e + e′ = φ(e).
Case 5: s(e) ∈ H. Then, since H is hereditary, r(e) ∈ H and so φ(s(e))φ(e) =
0 = φ(e).
Case 6: s(e) ∈ E 0 \H, r(e) ∈ H. Then φ(s(e))φ(e) = 0 = φ(e).
Next we check that the (CK1) relation holds, i.e. that φ(e∗i )φ(ej ) = δij φ(r(ei ))
for all ei , ej ∈ E 1 .
Case 1: r(ei ), r(ej ) ∈ T . Then φ(e∗i )φ(ej ) = e∗i ej = δij r(ei ) = δij φ(r(ei )).
Case 2: r(ei ) ∈ T, r(ej ) ∈ BH \S. Then φ(e∗i )φ(ej ) = e∗i (ej + e′j ) = δij r(ei ) =
δij φ(r(ei )) (we know that ei ≠ e′j since e′j ∉ LK (E)). A similar argument shows that
the relation holds for r(ei ) ∈ BH \S, r(ej ) ∈ T .
Case 3: r(ei ), r(ej ) ∈ BH \S. Then φ(e∗i )φ(ej ) = (e∗i + (e′i )∗ )(ej + e′j ) = e∗i ej +
(e′i )∗ e′j = δij (r(ei ) + r(ei )′ ) = δij φ(r(ei )).
Case 4: Either r(ei ) or r(ej ) ∈ H. Then φ(e∗i )φ(ej ) = 0 = δij φ(r(ei )).
Finally, we check that the (CK2) relation holds, i.e. that φ(v − ΣsE (e)=v ee∗ ) = 0
for every regular vertex v ∈ E 0 .
We now show that I(H,S) ⊆ ker(φ). By definition, I(H,S) is generated by the sets
{v : v ∈ H} and {v H : v ∈ S}, so it suffices to show that all such generating elements
are mapped to 0 under φ. We know that φ(v) = 0 for all v ∈ H. Now consider an
element v H , where v ∈ S. Then, using the same argument as we did when checking
the (CK2) relation, we have
φ(vH ) = φ(v − Σ{sE (e)=v, r(e)∉H} ee∗ ) = v − Σ{sE (e)=v, r(e)∈T } ee∗ − Σ{sE (e)=v, r(e)∈BH \S} (ee∗ + e′ (e′ )∗ ),
(following the same argument as above). Once again, this contradicts that I(H,S) ⊆
ker(φ) and so {v ∈ BH : v H ∈ I(H,S) } = S, completing the proof.
Note that if we take S to be the empty set, the statement of Proposition 3.3.7
simplifies to I(H) ∩ E 0 = H for all hereditary saturated subsets H of E 0 .
Now we come to perhaps the most important result of this section, which shows
that, for any admissible pair (H, S) of a graph E, the quotient ring LK (E)/I(H,S) is
in fact isomorphic to the Leavitt path algebra of the quotient graph E\(H, S). This
powerful result is from [To, Theorem 5.7(2)]. Here we have greatly expanded the
proof for clarity.
Theorem 3.3.8. Let E be an arbitrary graph and let (H, S) be an admissible pair
of E. Then
LK (E)/I(H,S) ≅ LK (E\(H, S)).
and

ϕ(e∗ ) = e∗ if r(e) ∈ (E 0 \H)\(BH \S),
ϕ(e∗ ) = ϕ(r(e)) e∗ if r(e) ∈ BH \S,
ϕ(e∗ ) = ϕ(r(e)′ ) e∗ if e = e′ (one of the new edges),
as required. Checking that these relations are preserved ensures that ϕ∗ is indeed a
K-homomorphism.
I(H,S) , since I(H,S) is a two-sided ideal. However, this implies r(f ) ∈ H, a contradiction,
and so ϕ(v) ∉ I(H,S) . Furthermore, if v′ ∈ B′H \S′ then ϕ(v′ ) = vH ∉ I(H,S) ,
since vH ∈ I(H,S) implies v ∈ S (again by Proposition 3.3.7), a contradiction since
Similarly, if r(e) ∈ (E 0 \H)\(BH \S), then e = ϕ(e) (and e∗ = ϕ(e∗ )). If r(e) ∈
BH \S, then by the above equation we have
Note that if we take S to be the empty set then Theorem 3.3.8 simplifies to
LK (E)/I(H) ≅ LK (E|H).
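As a quick illustration (a sketch reusing the clock graph C∞ and the set H = Pl (C∞ ) = {vi } from Example 3.2.13, and assuming the computations made there): every vi is a sink and u is an infinite emitter all of whose edges land in H, so H is hereditary and saturated, BH = ∅, and the quotient graph C∞ |H is the single vertex u with no edges. Thus
\[
L_K(C_\infty)\,/\,I(H) \;\cong\; L_K(C_\infty| H) \;\cong\; K.
\]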
So far we have been exclusively considering graded ideals of the form I(H,S) .
However, as the following theorem shows, any graded ideal of LK (E) is in fact of
the form I(H,S) for some admissible pair (H, S) of E. This result has been adapted
from [To, Theorem 5.7(1)].
Theorem 3.3.9. Let E be an arbitrary graph and let I be a graded ideal of LK (E).
If we let H = I ∩ E 0 and S = {w ∈ BH : wH ∈ I}, then I = I(H,S) .
Proof. Let I be a graded ideal of LK (E) and let H and S be the two sets de-
scribed above. Clearly we have I(H,S) ⊆ I, from the definition of I(H,S) . By The-
orem 3.3.8, there exists an isomorphism ϕ∗ : LK (E\(H, S)) → LK (E)/I(H,S) . Let
π : LK (E)/I(H,S) → LK (E)/I be the quotient map, so that π(x + I(H,S) ) = x + I.
Note that this map is well-defined, since I(H,S) ⊆ I. Consider πϕ∗ : LK (E\(H, S)) →
LK (E)/I. Now, since I is graded so too is LK (E)/I. Furthermore, both π and ϕ∗
are graded (by definition), and so πϕ∗ is also graded.
We wish to show that πϕ∗ (v) ≠ 0 for any v ∈ (E\(H, S))0 . Note that πϕ∗ (v) =
π(ϕ(v) + I(H,S) ) = ϕ(v) + I, so it suffices to show that ϕ(v) ∉ I for all v ∈
(E\(H, S))0 . We proceed in a similar fashion to the proof of Theorem 3.3.8. Suppose
v ∈ (E 0 \H)\(BH \S). Then ϕ(v) = v ∉ I, since H = I ∩ E 0 (by definition)
and v ∉ H. Now suppose v ∈ BH \S. Then ϕ(v) = Σ{s(e)=v, r(e)∉H} ee∗ . Suppose
that ϕ(v) ∈ I and choose a fixed edge f for which s(f ) = v and r(f ) ∉ H. Then
f ∗ ϕ(v)f = Σ{s(e)=v, r(e)∉H} f ∗ ee∗ f = r(f ) ∈ I, since I is a two-sided ideal. However,
this implies r(f ) ∈ H, a contradiction, and so ϕ(v) ∉ I. Furthermore, if v′ ∈ B′H \S′
then ϕ(v′ ) = vH ∉ I, since vH ∈ I implies v ∈ S (by the definition of S), a contradiction
as v ∈ BH \S. Therefore we have πϕ∗ (v) ≠ 0 for any v ∈ (E\(H, S))0 , as
required.
required.
Thus, since πϕ∗ is a graded homomorphism between two graded rings, we can
apply Theorem 2.2.13 to give that πϕ∗ is injective. Since ϕ∗ is an isomorphism, this
implies that π is injective. Thus π must be the identity map and so LK (E)/I(H,S) =
LK (E)/I and therefore I(H,S) = I, as required.
Note that if E is a row-finite graph, then E 0 cannot contain any breaking vertices
and so the set S in the statement of Theorem 3.3.9 will always be empty. Thus in
the row-finite case we have that I = I(H) for any graded ideal I of LK (E), where
H = I ∩ E 0.
Since Theorem 3.3.9 tells us that all graded ideals of LK (E) are of the form I(H,S)
for some admissible pair (H, S) of E, and Proposition 3.3.6 describes the structure
of such an ideal, we can now describe the structure of any graded ideal of LK (E).
We state this explicitly in the following corollary.
Corollary 3.3.11. For an arbitrary graph E, the Jacobson radical J(LK (E)) = 0.
Proof. We know that LK (E) is Z-graded and that E 0 is a set of local units for
LK (E), with each element of E 0 homogeneous. Thus, by Lemma 1.1.8 we have
that J = J(LK (E)) is a graded ideal. Furthermore, Theorem 3.3.9 tells us that
J = I(H,S) , where H = J ∩ E 0 and S = {w ∈ BH : wH ∈ J}. However, by
Lemma 1.1.7 we know that J(R) cannot contain any nonzero idempotents, and
so H = ∅. By the definition of BH , we must also have that S = ∅, and thus
J(LK (E)) = 0.
We finish this section with a result that will prove useful when examining the
socle series of a Leavitt path algebra in Section 3.4. This proof is based on the
homomorphism φ : LK (E) → LK (E\(H, S)) that we defined in the proof of Propo-
sition 3.3.7, as well as the isomorphism given in Theorem 3.3.8. This result is stated
in a simpler form in [ARM1, Theorem 1.7(ii)] and the reader is referred to Tom-
forde’s [To, Theorem 5.7]. However, Tomforde does not prove this result explicitly,
and so we provide details of the proof here.
Proof. Recall the homomorphism φ : LK (E) → LK (E\(H, S)) from the proof of
Proposition 3.3.7. To show that φ is an epimorphism, it suffices to show that φ
maps onto the set of generators of LK (E\(H, S)); that is, each vertex, edge and
ghost edge of LK (E\(H, S)) is in the image of φ. We begin by checking the vertices.
Case 1: v ∉ BH \S. Then φ(v) = v.
Case 2: v′ ∈ B′H \S′ . Then we have φ(vH ) = v′ (see the final paragraph of the
proof of Proposition 3.3.7).
Case 3: v ∈ BH \S. Then φ(v − vH ) = (v + v′ ) − v′ = v.
Similar arguments show that the ghost edges of E\(H, S) are also in the image
of φ. Thus φ is an epimorphism, as required.
Note that this proof relies on the fact that we already know that LK (E)/I(H,S) ≅
LK (E\(H, S)) from Theorem 3.3.8. If we could show that ker(φ) = I(H,S) directly,
then this would also prove LK (E)/I(H,S) ≅ LK (E\(H, S)), making Theorem 3.3.8
redundant. However, while we can easily show that I(H,S) ⊆ ker(φ), it is not clear
how to show that ker(φ) ⊆ I(H,S) without appealing to Theorem 3.3.8.
0 = S0 ≤ S1 ≤ · · · ≤ Sα ≤ Sα+1 ≤ · · · (α < τ )
For each α < τ , Sα is called the α-th left socle of R (and in particular, S1 =
socl (R)). The least ordinal λ for which Sλ = Sλ+1 is called the left Loewy length
of R, denoted l(R). If R = Sα for some α, then R is said to be a left Loewy ring
(of length α).
Starting with the right socle of R, we can define the right socle series of R (and
related terms) similarly.
Although the left and right socle series may differ in general, we will show in
Corollary 3.4.8 that they coincide for Leavitt path algebras. (Note that we already
know socl (LK (E)) = socr (LK (E)) by Corollary 3.2.2.) Thus, since we will hence-
forth only be concerned with the socle series of Leavitt path algebras, there is no
need to specify ‘left’ or ‘right’ when using terms related to the socle series.
In this section we give several results regarding the socle series of an arbitrary
Leavitt path algebra LK (E). In Theorem 3.4.7 we describe the α-th socle of LK (E)
for all ordinals α, and describe precisely when LK (E) is a Loewy ring of length λ.
Furthermore, in Theorem 3.4.12 we show that for any ordinal λ there exists a graph
E for which LK (E) is a Loewy ring of length λ.
Example 3.4.2. We begin by examining the socle series of some familiar Leavitt
path algebras.
(i) The finite line graph Mn . We saw in Example 3.2.13 that soc(LK (Mn )) =
S1 = LK (Mn ). Thus LK (Mn ), and therefore Mn (K), is a Loewy ring of length 1
(for all n ∈ N).
(ii) The rose with n leaves Rn . In Example 3.2.13 we showed that soc(LK (Rn )) =
S1 = 0. By definition S2 /S1 = soc(LK (Rn )/S1 ), and so S2 = soc(LK (Rn )) = 0.
Thus Sα = 0 for all ordinals α, and in particular LK (Rn ), and therefore L(1, n), is
certainly not a Loewy ring for any n ∈ N.
[Diagram: the infinite clock graph C∞ , with central vertex u emitting one edge to each of the vertices v1 , v2 , v3 , v4 , . . . (indicated by (∞)).]
We now look at a new example that will be integral to the proof of Theo-
rem 3.4.12. This example is a combination of Examples 2.1, 2.5, 2.6 and 2.7 from
[ARM1].
P0 : •v
In general, we construct the graph Pi+1 from the graph Pi by adding vertices
{vi+1,j : j ∈ N} and, for each j ∈ N, an edge from vi+1,j to vi+1,j+1 and an edge
from vi+1,j to vi,1 .
Now, LK (P0 ) ≅ K, and so soc(LK (P0 )) = LK (P0 ). Thus LK (P0 ) is a Loewy
ring with l(LK (P0 )) = 1. In the graph P1 , every vertex is a line point, and so by
Theorem 3.2.11 we have soc(LK (P1 )) = I((P1 )0 ) = LK (P1 ). Thus LK (P1 ) is also a
Loewy ring with l(LK (P1 )) = 1.
For the graph P2 , the set of line points is the top row of vertices H = {v1,j :
j ∈ N}. Note that H is both hereditary and saturated. Thus soc(LK (P2 )) = I(H).
Furthermore, note that the quotient graph P2 |H consists of the ‘bottom row’ of
vertices and edges and is clearly isomorphic as a graph to P1 . Thus, by Theorem 3.3.8
we have
LK (P2 )/I(H) ≅ LK (P2 |H) ≅ LK (P1 ),
and since LK (P1 ) is a Loewy ring with l(LK (P1 )) = 1, LK (P2 ) is therefore a Loewy
ring with l(LK (P2 )) = 2.
Using induction, it is easy to see that LK (Pn ) is a Loewy ring with l(LK (Pn )) = n
for all n ∈ N: to begin, we know this statement is true for n = 1 and n = 2. Now
assume it is true for n = i and consider the graph Pi+1 . The line points of Pi+1 are
again the set H = {v1,j : j ∈ N}, and Pi+1 |H is isomorphic to Pi . Thus, as above,
LK (Pi+1 |H) ≅ LK (Pi ), and since LK (Pi ) is a Loewy ring with l(LK (Pi )) = i (by
our assumption), LK (Pi+1 ) is therefore a Loewy ring with l(LK (Pi+1 )) = i + 1, as
required.
We now define a sequence of graphs Qn that are very similar to the graphs
Pn , except for one subtle but important difference. This example is from [ARM1,
Example 2.8].
As in the previous example, we now add a second ‘infinite line’ graph, but this
time we connect the two rows of vertices from the lower to the upper by adding an
edge from w1,j to w2,1 for each j ∈ N, giving the graph
In general, we construct the graph Qi+1 from the graph Qi by adding vertices
{wi+1,j : j ∈ N} and, for each j ∈ N, an edge from wi+1,j to wi+1,j+1 and an edge
from wi,j to wi+1,1 . Contrast this with the construction of Pi+1 , in which we add
an edge from vi+1,j to vi,1 for each j ∈ N. Despite this difference, it is clear that
the graph Qi is isomorphic to the graph Pi for each i ∈ N. Thus the Leavitt path
algebra LK (Qn ) is a Loewy ring with l(LK (Qn )) = n for all n ∈ N.
Once again, viewing Qi as being contained in Qi+1 for each i ∈ N, we can form
the graph Qω = ∪i<ω Qi . This is where the two examples diverge. For each i ∈ N,
Pl (Pi ) = {v1,j : j ∈ N} (which is independent of i) and so Pl (Pω ) = {v1,j : j ∈ N}.
However, Pl (Qi ) = {wi,j : j ∈ N} for each i ∈ N, and so Qω has no line points.
Therefore soc(LK (Qω )) = {0}, and so Sα = {0} for each α. Thus, while LK (Pω ) is
a Loewy ring, its counterpart LK (Qω ) is not.
We now give a definition that is an integral part of Theorem 3.4.7. This definition
is from [ARM1, Definition 3.1], although this version differs from the published
version for reasons that will be explained after the proof of Theorem 3.4.7.
Definition 3.4.5. Let E be an arbitrary graph and let LK (E) be its associated
Leavitt path algebra. Recall the definitions of the quotient graph E\(H, S) and v H
from Section 3.3. For each ordinal γ, we define transfinitely a subset Vγ of E 0 as
follows.
(i) If γ = 1, then define V1 to be the hereditary saturated closure of Pl (E).
Now suppose γ > 1 is any ordinal and that the sets Vα have been defined for all α < γ.
Let Sα denote the α-th socle of LK (E) and define Bα := {w ∈ BVα : wVα ∈ Sα }.
0
(ii) If γ = α + 1 is a non-limit ordinal, then Vγ = E 0 ∩ I(Vα+1 ), defining
0
Vα+1 = Vα ∪ Wα ∪ Zα ,
where
and ( )
X
Zα = v− ee∗ : v ∈ BVα \Bα
s(e)=v,
r(e)∈V
/ α
S
(iii) If γ is a limit ordinal, then Vγ = α<γ Vα .
Lemma 3.4.6. Each subset Vγ (as defined in Definition 3.4.5) is a hereditary sat-
urated subset of E 0 .
Proof. We know that the set of line points of E must be a hereditary subset of E 0
since, given a vertex v ∈ Pl (E), every vertex w ∈ T (v) must also be a line point, by
definition. Thus V1 , the hereditary saturated closure of Pl (E), must be a hereditary
saturated subset of E 0 .
0 0
If γ is a non-limit ordinal, then Vγ = E 0 ∩ I(Vα+1 ), where I(Vα+1 ) is as defined
0
above. Since I(Vα+1 ) is an ideal, Vγ must be a hereditary saturated subset, by
Lemma 2.2.1.
For the case where γ is a limit ordinal, take a vertex v ∈ Vγ and a vertex
S
w ∈ T (v). Since Vγ = α<γ Vα , we must have v ∈ Vα for some α < γ, and since Vα
is hereditary, we have w ∈ Vα and so w ∈ Vγ . Now suppose that u is a regular vertex
in E 0 such that, for each ei ∈ s−1 (u), we have r(ei ) ∈ Vγ . Since Vα ⊆ Vα+1 for each
α < γ, there must exist some α < γ for which r(ei ) ∈ Vα for all ei ∈ s−1 (u). Then,
since Vα is saturated, we must have that u ∈ Vα and thus u ∈ Vγ , as required.
We now come to the main result of this section. This theorem is from [ARM1,
Theorem 3.2], although it differs from the published version, which the author found
to be incorrect for a number of reasons. After correspondence with one of the authors
of the paper, the theorem was adjusted to the current version below. The differences
between versions and why the changes were made will be discussed after the proof.
The proof has also been expanded to clarify some of the arguments used.
Theorem 3.4.7. Let E be an arbitrary graph and let LK (E) be its associated Leavitt
path algebra. For each ordinal α, let Sα denote the α-th socle of LK (E), and let Vα
and Bα be the subsets of E 0 and BVα , respectively, defined in Definition 3.4.5. Then
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 115
(iv) LK (E)/Sα ∼
= LK (E\(Vα , Bα )) as graded K-algebras for each α; and
(v) LK (E) is a Loewy ring of length λ if and only if λ is the smallest ordinal such
that E 0 = Vλ .
Let Wα = {v1 , v2 , . . . , w1 , w2 , . . .}, where each vi ∈ (E 0 \Vα )\(BVα \Bα ) and each
wi ∈ BVα \Bα . Thus
ee∗ ) = u0i
P
Recalling from the proof of Proposition 3.3.7 that φ(ui − s(e)=ui ,r(e)∈V
/ α
for each ui ∈ BVα \Bα , we also have φ(Zα ) = BV0 α \Bα0 . Thus
0
φ(Vα+1 ) = {v1 , v2 , . . . , w1 + w10 , w2 + w20 , . . .} ∪ (BV0 α \Bα0 ).
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 116
0
Now, since each wi0 ∈ BV0 α \Bα0 , each wi = (wi + wi0 ) − wi0 ∈ I(φ(Vα+1 )), and so
0
I(φ(Vα+1 )) = I({v1 , v2 , . . . , w1 , w2 , . . .} ∪ (BV0 α \Bα0 )) = I(Wα ∪ (BV0 α \Bα0 )).
By definition, Wα is the set of all line points in E\(Vα , Bα ) that are also vertices in
the original graph E. Furthermore, the only new vertices introduced into E\(Vα , Bα )
are the set BV0 α \Bα0 , which are sinks (and therefore line points) by definition. Thus
Pl (E\(Vα , Bα )) = Wα ∪ (BV0 α \Bα0 ) and so, by Theorem 3.2.11,
Now, by our induction hypothesis we have I(Vα ,Bα ) = Sα , and so by Theorem 3.3.8
we have LK (E)/Sα ∼= LK (E\(Vα , Bα )). Specifically, the function φ̄ : LK (E)/Sα →
LK (E\(Vα , Bα )) with φ̄(x + Sα ) = φ(x) is an isomorphism. Thus, from the socle
series definition we have
0
Thus φ̄(Sα+1 /Sα ) = φ(I(Vα+1 )), and so
0 0
giving Sα+1 = I(Vα+1 ). Thus Vα+1 = I(Vα+1 ) ∩ E 0 = Sα+1 ∩ E 0 , proving (ii).
Thus we have shown properties (i)-(iv) for when γ is not a limit ordinal. If γ is a
S S
limit ordinal, then by definition Sγ = α<γ Sα and Vγ = α<γ Vα . Since each Sα is
graded, Sγ is also graded, proving (i). Furthermore, if Vα = Sα ∩ E 0 for each α < γ
then it follows that Vγ = Sγ ∩ E 0 , proving (ii). As above, the fact that Sγ = I(Vγ ,Bγ )
follows from (i) and (ii) and the definition of Bγ , and (iv) follows directly from (iii)
and Theorem 3.3.8. Thus we have established (i)-(iv) for all γ.
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 117
Finally, note that LK (E) is a Loewy ring of length λ if and only if λ is the smallest
ordinal for which Sλ = LK (E), by definition. By Lemma 2.2.3, Sλ = LK (E) if and
only if Sλ ∩ E 0 = E 0 , that is, Vλ = E 0 (by (ii)). Thus LK (E) is a Loewy ring of
length λ if and only if λ is the smallest ordinal for which Vλ = E 0 , proving (v).
The primary error in the original proof of [ARM1, Theorem 3.2] was the assump-
tion that Sα = I(Vα ) rather than Sα = I(Vα ,Bα ) . While the property Sα = I(Vα ) was
not stated explicitly in the theorem itself, the assumption is implied when [ARM1,
Theorem 1.7(ii)] is invoked to give LK (E)/Sα ∼ = LK (E|Vα ) during the induction
process. As shown in the proof above, the fact that Vα = E 0 ∩ Sα (together with
Theorem 3.3.9) implies directly that Sα = I(Vα ,Bα ) , and I(Vα ,Bα ) 6= I(Vα ) unless
Bα = ∅, which is not true in general. Thus we have changed the proof of Theo-
rem 3.4.7 accordingly and have added the statement Sα = I(Vα ,Bα ) as property (iii) for
clarity. Furthermore, [ARM1, Theorem 3.2(3)] states that ‘LK (E)/Sα ∼ = LK (E|Vα )
as graded K-algebras for each K’; here we have changed that to ‘LK (E)/Sα ∼
=
L(E\(Vα , Bα )) as graded K-algebras for each α’ in property (iv).
and that this definition allows us to conclude in the proof of Theorem 3.4.7 that
Pl (E\(Vα , Bα )) = Wα ∪ (BV0 α \Bα0 ), an equality that is central to the proof. In
[ARM1, Definition 3.1], the corresponding set is defined as
However, such vertices will not necessarily be line points in the quotient graph
E\(Vα , Bα ), since there is the possibility that a new edge e0 with s(e0 ) = u ∈
TE (w)\Vα will be added in the construction of E\(Vα , Bα ), making u a bifurcation
in the quotient graph. Hence we have modified the definition in our version.
As promised at the beginning of this section, we now show that the left and right
socle series of a Leavitt path algebra coincide.
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 118
Corollary 3.4.8. Let E be an arbitrary graph. For any ordinal α < 2|LK (E)| , the
α-th left socle of LK (E) is equal to the α-th right socle of LK (E).
Proof. We proceed using transfinite induction. For ease of notation we will denote
α-th left socle of LK (E) by Sα and the α-th right socle of LK (E) by Tα . For the
case α = 1, we have S1 = socl (LK (E)) = socr (LK (E)) = T1 , by Corollary 3.2.2.
Now let 1 < α < 2|LK (E)| and suppose that Sβ = Tβ for all β < α. Moreover,
suppose that α = β+1, where β is not a limit ordinal. Then, applying Corollary 3.2.2
and Theorem 3.4.7 (iv), we have
We now proceed with several ring-theoretic results related to the socle series of
a Leavitt path algebra. Because some of these results rely on Theorem 3.4.7, the
proofs have had to be subtly adjusted. However, these adjustments have not led to
any changes in the results themselves. The first result is from [ARM1, Proposition
3.3].
Proposition 3.4.9. Let E be an arbitrary graph and let Sα be the α-th socle of
LK (E). Each Sα is a von Neumann regular ring.
Proof. It is known (see for example [J2, pages 65, 90]) that for a semiprime ring R,
soc(R) is a direct sum of simple rings Ti and that each Ti is the directed union of
full matrix rings over division rings. By the remark on p.67 of [L1], a matrix ring
over a division ring is von Neumann regular, and thus a directed union of matrix
rings over division rings must be von Neumann regular. Since soc(R) is a direct sum
of such rings, it must also be von Neumann regular. Now we know that LK (E) is
semiprime by Proposition 3.2.1, and so S1 = soc(LK (E)) is von Neumann regular.
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 119
Proposition 3.4.9, together with the yet-to-come Theorem 4.2.3, leads to the
following corollary.
Proof. If LK (E) is a Loewy ring then LK (E) = Sα for some α. Thus LK (E) is von
Neumann regular (by Proposition 3.4.9) and so, by Theorem 4.2.3, E is acyclic and
LK (E) is locally K-matricial.
Note that the converse is not true: recall the graph Qω from Example 3.4.3,
which was acyclic but not a Loewy ring since Sα = {0} for all α. However, the
following corollary (from [ARM1, Corollary 3.5]) shows that we have equivalence
when E 0 is finite.
Corollary 3.4.11. Let E be a graph for which E 0 is finite. The following statements
are equivalent.
(ii) E is acyclic
(iv) LK (E) is semisimple (in this case, we have l(LK (E)) = 1).
Proof. Theorem 4.2.3 gives (ii) ⇐⇒ (iii), while Corollary 3.4.10 gives (i)⇒(ii). To
show (ii)⇒(i), suppose that E is acyclic. Since E 0 is finite, this implies that E must
contain at least one sink, and so Pl (E) 6= ∅. Recall from Definition 3.4.5 that V1 =
Pl (E) 6= ∅. If V1 = E 0 then LK (E) is a Loewy ring (of length 1) by Theorem 3.4.7
(v). If not, then we can form the quotient graph E\(V1 , B1 ). This graph must also
be acyclic, since the only edges added in the construction of the quotient graph end
in sinks, by definition. Now, if the added vertices v 0 ∈ BV0 1 \B10 are the only sinks in
E\(V1 , B1 ), then the graph (E\(V1 , B1 )) \ ((BV0 1 \B10 )∪{e0 ∈ (E\(V1 , B1 ))1 }) contains
no sinks, a contradiction since this is also a finite and acyclic graph. Thus E\(V1 , B1 )
must contain a vertex from the original graph E that is a sink in E\(V1 , B1 ).
By our observation above, W1 6= ∅ and so V1 ⊂ V2 , giving |V2 | > |V1 |. Again, either
V2 = E 0 , in which case we are done, or we can repeat the above argument to show
that |V3 | > |V2 |, and so on. Since E 0 is finite, this ascending chain of subsets of E 0
must stop, eventually giving Vn = E 0 for some n ∈ N. Thus LK (E) is a Loewy ring
by Theorem 3.4.7 (v).
Now suppose that E 1 is finite and E is acyclic. Then, by Lemma 2.2.9, LK (E) is
isomorphic to a direct sum of matrix rings over K. Since each matrix ring is simple
(by Lemma 1.1.10), LK (E) is therefore the direct sum of simple left ideals and so is
semisimple, showing (ii)⇒(iv). If LK (E) is semisimple, then soc(LK (E)) = LK (E)
and so l(LK (E)) = 1, as required. Thus LK (E) is a Loewy ring, showing (iv)⇒(i)
and completing the proof.
We now come to the second main result of this section, which is from [ARM1,
Theorem 4.1].
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 121
Theorem 3.4.12. For every ordinal λ and any field K, there exists an acyclic graph
Pλ for which LK (Pλ ) is a Loewy ring of length λ.
Proof. We construct a series of graphs Pn that transfinitely extends the series intro-
duced in Example 3.4.3. For λ = 1, we choose E = P1 , the ‘infinite line’ graph
P1 is clearly acyclic.
Now suppose that λ ≥ 2 is any ordinal, and suppose that the graphs Pα have
been defined for all α < λ and that each Pα has Loewy length α. There are three
possibilities for λ. First, suppose that λ = α + 1, where α is not a limit ordinal.
Then, in a similar manner to Example 3.4.3, we construct the graph Pα+1 from Pα
by adding vertices {vα+1,j : j ∈ N} and, for each j ∈ N, an edge from vα+1,j to
vα+1,j+1 and an edge from vα+1,j to vα,1 , giving
O fj l
Pα+1 : Pα ∪ O
(∞)
•vα+1,1
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 122
Note that in each case Pλ is acyclic (as required) and Pα is a subgraph of Pλ for
all α < λ, giving a chain of subgraphs P1 ⊂ P2 ⊂ · · · ⊂ Pλ−1 ⊂ Pλ .
We now show by transfinite induction that l(LK (Pα )) = α for each ordinal α.
We do this by showing that α is the smallest ordinal for which Vα = Pα0 and then
applying Theorem 3.4.7 (v). For α = 1, we have V1 = Pl (P1 ) = P10 , since every
vertex in P1 is a line point.
Now let λ be any ordinal greater than 1 and suppose that LK (Pα ) is a Loewy
ring of length α for all α < λ. Since each Pα can be viewed as a subgraph of Pλ , this
is equivalent to assuming that Vα = Pα0 for each α < λ, where each Vα is a subset of
Pλ0 .
Suppose that λ = β + 1, where β is not a limit ordinal. Now Vβ = Pβ0 , and since
there are no infinite emitters going into Vβ we have BVβ = ∅. Thus it is easy to see
that the definition of Vλ simplifies to Vλ = E 0 ∩ I(Vβ ∪ Wβ ), where Wβ is the set
We finish this section with a result from [ARM1, Theorem 4.2] that shows there
exists an upper bound on the Loewy length of the Leavitt path algebra of a row-
finite graph. The proof of this theorem refers to the set Wα from Definition 3.4.5.
1
Recall from Example 3.4.3 that P0 is the graph consisting of a single vertex and no edges.
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 123
As noted earlier, this definition is different from the one seen in [ARM1, Definition
3.1] and so we have had to modify the following proof accordingly.
Theorem 3.4.13. Suppose that the Leavitt path algebra LK (E) is a Loewy ring. If E
is a row-finite graph then LK (E) must have Loewy length ≤ ω1 , the first uncountable
ordinal.
Proof. Suppose that LK (E) has Loewy length greater than ω1 . Let Sω1 be the ω1 -st
socle of LK (E) and let Vω1 be the set of vertices defined in Definition 3.4.5. Recall
from the definition that Vω1 +1 contains the set
Let U = {u1 , u2 , . . .} be the set of bifurcation vertices in TE (w) that are also
contained in TE|Vω1 (w) (though indeed they are not bifurcations in E|Vω1 since w
is a line point in E|Vω1 ). Since E is row-finite, for a fixed positive integer n the
number of paths of length n with source w is finite, and so the number of vertices
in TE (w) is at most countable and thus |U | is at most countable. Furthermore, U
is not empty. To see this, suppose that U is empty and let p be a path of minimum
length in E from w to a bifurcation u ∈ TE (w). Since the only vertices removed
in the construction of E|Vω1 are in the set Vω1 , if u ∈
/ TE|Vω1 (w) then there must
be a vertex v ∈ p0 for which v ∈ Vω1 (noting that we may have v = u). However,
since there are no bifurcations between w and v, the saturated nature of Vω1 implies
that w ∈ Vω1 , a contradiction. Thus U is not empty. Note that, by definition, each
ui ∈
/ Vω1 .
emits at least one edge into Vω1 . For each ui ∈ U , let s−1 (ui ) = {ei1 , . . . , eik(i) } (a
finite set since E is row-finite), and define
Note that each of these sets is nonempty since each ui emits at least one edge into
Vω1 , as explained above. From the definition of Ji we have r(Ji ) ⊆ Vω1 . Thus, since
S
Vω1 = α<ω1 Vα , for each ui ∈ U we have r(Ji ) ⊆ Vαi for some αi < ω1 .
S
Let γ = sup{αi : i = 1, 2, . . .}, so that ui ∈U r(Ji ) ⊆ Vγ (noting that γ < ω1 ,
since U is countable). Thus the quotient graph E|Vγ contains none of the edges in
0
S
ui ∈U Ji . Since each ui emits at most one edge into E \Vω1 , and therefore at most
one edge into E 0 \Vγ , each ui must be a line point in E|Vγ . Thus, by Theorem 3.2.11,
each ui ∈ soc(LK (E|Vγ ). Recall the definition of φ : LK (E) → LK (E|Vγ ) from
Proposition 3.3.7. Since each ui ∈
/ Vω1 we have ui ∈
/ Vγ and so φ(ui ) = ui . Letting
φ̄ : LK (E)/Sγ → LK (E|Vγ ) be the isomorphism defined by φ̄(x + Sγ ) = φ(x), we
therefore have φ̄−1 (ui ) = ui + Sγ . Thus we have
Since each ui ∈ soc(LK (E|Vγ )), we have ui +Sγ ∈ Sγ+1 /Sγ . Thus each ui ∈ Sγ+1 ,
and so ui ∈ Sγ+1 ∩ E 0 = Vγ+1 ⊆ Vω1 , contradicting the fact that each ui ∈
/ Vω1 . Thus
LK (E) has Loewy length ≤ ω1 , as required.
One may be tempted to think that ω, the first countable ordinal, would be
an upper bound for the Loewy length of LK (E) in the case that E is row-finite.
However, [ARM1, Example 4.3] constructs a series of row-finite graphs Pα for which
the Loewy length of LK (Pα ) = α for each α < ω1 , thus showing that ω1 is indeed
the best possible upper bound.
Chapter 4
In this chapter we define various notions of ‘regularity’ for a ring and examine Leavitt
path algebras with these properties in Sections 4.2 and 4.3. Furthermore, in Section
4.4 we examine Leavitt path algebras that are self-injective; that is, injective as
left (or right) modules over themselves. To begin, we define the construction of a
particular K-subalgebra of a Leavitt path algebra that will be integral to proving
our main result in Section 4.2.
125
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 126
and similarly r(F ) to be the set of all vertices that are the range of at least one edge
in F . We construct a new graph, EF , in two parts. First, we define the vertices:
EF0 = F ∪ WF ∪ ZF ,
where
WF = r(F ) ∩ s(F ) ∩ s(E 1 \F ) and ZF = r(F )\s(F ).
In other words, each edge in F becomes a vertex in our new graph. In addition, we
include all vertices which are both the source and range of at least one edge in F as
well as the source of at least one edge that is not in F (the set WF ), as well as all
vertices that are the range of at least one edge in F but not the source of any edge
in F (the set ZF ). Now we define the edges of EF :
following the convention that s(v) = v when x = v is a vertex from our original
graph E (i.e. when x ∈ WF ∪ ZF ⊆ EF0 ). In other words, EF1 is the set of ordered
pairs (f, x) of edges f ∈ F and vertices x ∈ EF0 for which either f x forms a path in
our original graph E (if x ∈ F ), or x is the range vertex for f in E (if x ∈ WF ∪ ZF ).
Note that, since F is finite, the graph EF must also be finite. Also, any vertices
in the sets WF or ZF become sinks in our new graph. We illustrate the construction
of EF with the following example.
v2
•
e2 yy
y<
y
yy
e1 yy
E= •v0 / •v1
EE
EE
E
e3 EEE
"
•v3
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 127
and let F be the set of edges {e1 , e2 }. Then WF = {v1 } and ZF = {v2 }, and so
EF0 = {e1 , e2 , v1 , v2 }. Thus EF1 = {(e1 , e2 ), (e1 , v1 ), (e2 , v2 )}, and so we have
(e1 ,e2 )
•e1 / •e2
•v 1 •v 2
Lemma 4.1.3. Let E be an acyclic graph. Then, for any finite subset F of E 1 , the
graph EF is acyclic.
∗
ee P
if w = e ∈ F
φ(w) = w − f ∈F,s(f )=w f f ∗ if w ∈ WF
w if w ∈ ZF ,
∗
ef f P
if h = (e, f ), f ∈ F
φ(h) = e − f ∈F,s(f )=r(e) ef f ∗ if h = (e, r(e)), r(e) ∈ WF
e if h = (e, r(e)), r(e) ∈ ZF ,
and
φ(h∗ ) = (φ(h))∗ for all h∗ ∈ (E 1 )∗
Note that e∗i ej appears in every term in the above expression, which simplifies
to δij (r(ei )) by the (CK1) relation in LK (E). Note also that the (CK1) relation
simplifies the last term in the above expression to
! ! !
X X X
δij fi fi∗ fi fi∗ = δij fi fi∗ .
fi ∈F, fi ∈F, fi ∈F,
s(fi )=r(ei ) s(fi )=r(ei ) s(fi )=r(ei )
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 129
= δij φ(r(ei ))
as required. Similar calculations can be made for the other Leavitt path algebra
relations, and for each subcase contained within. We are now ready to show that
properties (i), (ii) and (iii) hold for our definition of φ.
which is a finite set since each gi ∈ F and F is finite. For each gi ∈ s−1
E (r(f )) we
have f gi gi∗ = φ((f, gi )), and so , applying the (CK2) relation, we have f = f r(f ) =
∗ ∗
P P P
f i gi gi = i f gi gi = i φ((f, gi )) ∈ Im(φ).
Now suppose that r(f ) is not a sink and emits edges only into E 1 \F . This
again implies that r(f ) ∈ r(F )\s(F ) = ZF and so f = φ((f, r(f )). Thus the
only remaining case is that r(F ) emits edges into both F and E 1 \F . In this case,
r(f ) ∈ r(F ) ∩ s(F ) ∩ s(E 1 \F ) = WF . Let {g1 , . . . , gm } be the subset of edges in F
for which s(gi ) = r(f ). As above, we have f gi gi∗ = φ((f, gi )) for each gi . Thus
Lemma 4.1.5. Let E be a graph, let F be a finite subset of E 1 and let φ : LK (EF ) →
LK (E) be the homomorphism defined in Proposition 4.1.4. If E is acyclic, then φ
is a monomorphism.
Proof. Recall that, for each w ∈ EF0 , we have φ(w) = ee∗ if w = e ∈ F , φ(w) = w if
w ∈ ZF and φ(w) = w − f ∈F,s(f )=w f f ∗ if w ∈ WF . For the former two cases, it is
P
clear that φ(w) 6= 0; for the latter case, recall that WF = r(F ) ∩ s(F ) ∩ s(E 1 \F ),
so that w emits at least one edge that is in E 1 \F and thus φ(w) 6= 0 by the
(CK2) relation. Therefore φ(v) 6= 0 for every vertex v ∈ EF0 . If E is acyclic, then
Lemma 4.1.3 gives that EF is acyclic, so it is trivially true that φ maps each cycle
without exits to a non-nilpotent homogeneous element of nonzero degree. Thus, by
Theorem 2.2.15, φ is a monomorphism.
where each kri , lrj is a nonzero element of K, each vri ∈ E 0 and each prj , qrj ∈ E ∗ .
Additionally, for each j ∈ {1, . . . , t(r)}, at least one of prj or qrj has length 1 or
greater (since the case in which both paths have zero length is covered in the first
sum).
Let F denote the set of edges that appear in the representation of some prj or
qrj for 1 ≤ j ≤ t(r), 1 ≤ r ≤ n. Furthermore, let S be the set of vertices
S = vr1 , . . . , vrs(r) : 1 ≤ r ≤ n .
Thus F , F ∗ and S are the sets of all edges and vertices, respectively, that appear in
the representation of our elements in X. Note that both F and S must be finite.
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 131
S1 = S ∩ r(F ),
S2 = {v ∈ T : s−1 −1
E (v) ⊆ F and sE (v) 6= ∅},
S3 = {v ∈ T : s−1
E (v) ∩ F = ∅}, and
S4 = {v ∈ T : s−1 −1 1
E (v) ∩ F 6= ∅ and sE (v) ∩ (E \F ) 6= ∅}.
In other words, S1 is the set of all vertices in S that are the range of some edge in
F . For those vertices in S that are not the range of some edge in F , we then have
three cases: vertices that emit edges only into F , vertices that emit no edges into
F , and vertices that emit edges into both F and E 1 \F ; these three cases make up
the subsets S2 , S3 and S4 , respectively.
Finally, let EF be the graph corresponding to our set of edges F , as defined in
Definition 4.1.1, and let φ : LK (EF ) → LK (E) be the homomorphism defined in the
proof of Proposition 4.1.4. We are now ready to construct our subalgebra B(X) of
LK (E), as defined in [AR, Definition 3].
(i) X ⊆ B(X);
L L
(ii) B(X) = Im(φ) ⊕ vi ∈S3 Kvi ⊕ wj ∈S4 Kuwj ;
(iv) LK (E) = −
lim
→{X⊆LK (E), X finite} B(X).
Proof. To prove (i), recall that the set X is generated by the subsets F , F ∗ and
S = S1 ∪ S2 ∪ S3 ∪ S4 , as defined above. By Proposition 4.1.4, we have F ∪ F ∗ ⊆
Im(φ) ⊆ B(X) (property (i)), S1 ⊆ r(F ) ⊆ Im(φ) ⊆ B(X) (property (ii)) and
S2 ⊆ Im(φ) ⊆ B(X) (property (iii)). Finally, S3 ∪ S4 ⊆ B(X), by definition, and so
X ⊆ B(X), as required.
= δij uwi
P L
as required. Thus wj ∈S4 Kuwj = wj ∈S4 Kuwj .
L L
We now show that the sum Im(φ) + vi ∈S3 Kvi + wj ∈S4 Kuwj is direct.
L
We begin by showing that vi ∈S3 Kvi ∩ Im(φ) = {0}. Let v ∈ S3 . By the
definition of S3 we have v ∈
/ r(F ) ∪ s(F ), and so v ∈
/ WF ∪ ZF . We now show that
v is orthogonal to each element φ(x), where x ∈ (EF0 ) ∪ (EF1 ) ∪ (EF1 )∗ . If x = e ∈ F ,
then v · φ(x) = vee∗ = 0, since v ∈
/ s(F ). If x = w ∈ WF , then v 6= w (since
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 133
f f ∗ ) = 0. If x = w ∈ ZF , then again
P
v∈
/ WF ) and so v · φ(x) = v(w − f ∈F,s(f )=w
v 6= w (since v ∈
/ ZF ) and so v · φ(x) = vw = 0. Similarly, it is easy to see that
φ(x) · v = 0 for each of the above three cases. Thus v is orthogonal to each element
in φ(EF0 ). Now suppose h = (x, y) ∈ EF1 . Then φ(h) = φ(xhy) = φ(x)φ(h)φ(y) and
φ(h∗ ) = φ(yh∗ x) = φ(y)φ(h∗ )φ(x). Since x, y ∈ EF0 , v is therefore orthogonal to
φ(h) and φ(h∗ ). Therefore v is orthogonal to each generator of Im(φ), and so Kvi is
orthogonal to Im(φ) for each vi ∈ S3 . Since each vi is an idempotent, we therefore
L
have vi ∈S3 Kvi ∩ Im(φ) = {0}, as required.
L
Now we show that wj ∈S4 Kuwj ∩Im(φ) = {0}. Let w ∈ S4 . By the definition
of S4 we have w ∈
/ r(F ), and so again w ∈
/ WF ∪ ZF . Again, we must show that uw
is orthogonal to each element φ(x), where x ∈ (EF0 ) ∪ (EF1 ) ∪ (EF1 )∗ . If x = e ∈ F ,
then uw · φ(x) = (w − f ∈F,s(f )=w f f ∗ )ee∗ = δw,s(e) ee∗ − δw,s(e) ee∗ ee∗ = 0, using the
P
it is easy to see that φ(x) · uw = 0 for each of the above three cases. Thus, using the
same logic as above, we have that uw is orthogonal to each generator of Im(φ), and
thus Kuwj is orthogonal to Im(φ) for each wj ∈ S4 . As shown above, {uwj : wj ∈ S4 }
L
is a set of pairwise orthogonal idempotents, and so wj ∈S4 Ku w j
∩ Im(φ) = {0}.
Now take v ∈ S3 and w ∈ S4 . Since S3 ∩ S4 = ∅, we have v 6= w and so v · uw =
v(w− f ∈F,s(f )=w f f ∗ ) = 0 = (w− f ∈F,s(f )=w f f ∗ )v = uw ·v. Thus
P P L
vi ∈S3 Kvi ∩
L
wj ∈S4 Kuwj = {0}. Therefore the three sets are mutually orthogonal, and so
L L L L
Im(φ)+ vi ∈S3 Kvi + wj ∈S4 Kuwj = Im(φ)⊕ vi ∈S3 Kvi ⊕ wj ∈S4 Kuwj ,
as required.
Now we need to show that this direct sum is indeed equal to B(X). For ease
L L
of notation, let Im(φ) ⊕ vi ∈S3 Kvi ⊕ wj ∈S4 Kuwj = A. It is clear that
L
Im(φ) ⊆ B(X) and vi ∈S3 Kvi ⊆ B(X), by definition. Let w ∈ S4 . Then for
show that each of its generating elements is contained in A. It is clear that Im(φ) ⊆ A
and S3 ⊆ A. Furthermore, if w ∈ S4 , then w = uw + f ∈F,s(f )=w f f ∗ ∈ A, since
P
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 134
Finally, let M = −→ {X⊆LK (E), X finite} B(X) (for ease of notation) and suppose
lim
that LK (E) 6= M . Then there must exist a finite subset X ⊆ LK (E) such that
X * M . However, since X ∈ B(X) (by (i)) and M is the limit of the upward-
directed set of all such subalgebras B(X) (by (iii)), this is impossible. Thus we have
LK (E) = M , as required, completing the proof.
(ii) R is said to be left π-regular (resp. right π-regular) if, for every x ∈ R, there
exist y ∈ R and n ∈ N such that xn = yxn+1 (resp. xn = xn+1 y).
It is clear that any ring R that is von Neumann regular is also π-regular since,
taking n = 1, for every x ∈ R there exists a y ∈ R such that xn = xn yxn . However,
the converse is not true. Consider, for example, the ring R = Z/4Z. Now R is
π-regular, since 2̄2 = 0̄ = 2̄2 1̄ 2̄2 and 3̄2 = 1̄ = 3̄2 1̄ 3̄2 . However, it is clear that 2̄
has no von Neumann regular inverse, and so R is not von Neumann regular.
(see for example [Ri, Example (c), p.131]) and is therefore π-regular. However, if we
let f : V → V be the shift transformation defined by f (x1 ) = 0 and f (xi+1 ) = f (xi )
for i > 1, then we have ker(f ) = Kx1 , ker(f 2 ) = Kx1 ⊕ Kx2 and in general
Ln
ker(f n ) = i=1 Kxi . If there were to exist a g ∈ R for which f
n
= gf n+1 , we
Ln+1
would have ker(gf n+1 ) ⊇ ker(f n+1 ) = n
i=1 Kxi ⊃ ker(f ), which is impossible.
Thus R is not strongly π-regular, and so in general the property π-regular does not
necessarily imply strongly π-regular.
The following lemma (from [AR, Lemma 2]) is useful in the context of Leavitt
path algebras.
Lemma 4.2.2. Let R be a ring with local units. Then R is strongly π-regular if and
only if the subring eRe is strongly π-regular, for every nonzero idempotent e ∈ R.
Proof. Suppose that R is strongly π-regular and let x ∈ eRe for some idempotent
e ∈ R. Since x is an element of R, there exist y, z ∈ R such that xn = yxn+1 and
xm = xm+1 z, for some m, n ∈ N. Furthermore, since x ∈ eRe we have x = xe = ex,
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 136
and so xn = exn = e(yxn+1 ) = eyexn+1 . Thus there exists an element y 0 = eye ∈ eRe
for which xn = y 0 xn+1 . Similarly, we can find an element z 0 = eze ∈ eRe such that
xm = xm+1 z 0 , and so eRe is strongly π-regular.
Conversely, suppose that eRe is strongly π-regular for every idempotent e ∈ R
and let x ∈ R. Since R has local units, there exists an idempotent f ∈ R such
that x ∈ f Rf . Since f Rf is strongly π-regular, there exist y, z ∈ f Rf for which
xn = yxn+1 and xm = xm+1 z, for some m, n ∈ N. However, since y, z are elements
of R, this implies that R is also strongly π-regular, completing the proof.
We now proceed to our main result for this section (from [AR, Theorem 1]),
which shows, perhaps surprisingly, that the properties von Neumann regular, π-
regular and strongly π-regular are equivalent for Leavitt path algebras. We also
finally show that LK (E) is locally matricial if and only if E is acyclic, a result
first mentioned in Section 2.2 (see page 56). Here we utilise the subalgebra B(X)
introduced in Section 4.1.
Theorem 4.2.3. Let E be an arbitrary graph. The following statements are equiv-
alent:
(iii) E is acyclic
Proof. (i)⇒(ii): This is immediate, since any von Neumann regular ring is π-regular.
(ii)⇒(iii): Suppose that LK (E) is π-regular and that there exists a cycle c based
at a vertex v in E. Let x = v + c ∈ LK (E). Since LK (E) is π-regular, there exists a
y ∈ LK (E) and n ∈ N such that xn yxn = xn . Note that xv = x = vx and so, letting
a = vyv, we have xn axn = xn (vyv)xn = xn yxn = xn . Now break a into its graded
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 137
components, so that
t
X
a= ai ,
i=s
where s, t ∈ Z, as 6= 0, at =
6 0 and deg ai = i for all s ≤ i ≤ t. Now vav =
v(vyv)v = vyv = a, and so i=s vai v = ti=s ai . Since deg(v) = 0, equating graded
Pt P
Since deg(c) > 0, we have deg(ck ) > 0 for all 1 ≤ k ≤ n, and so the lowest-degree
term on the left-hand side is vas v. Since the term of lowest degree on the right-hand
side is v, we have vas v = v and thus as = v. This implies s = 0, and so we can
write a = ti=0 ai , with a0 = v. Now suppose that c is a cycle of length m, so that
P
deg(ck ) = km. With the exception of the first term, every term on the right-hand
side contains a power of c, and so every term on the right-hand side is of degree km,
where 0 ≤ k ≤ n. Note that on the left-hand side, the leftmost terms of each bracket
Pt Pt
multiply to give v i=0 ai v = i=0 ai , and so each ai appears in the expansion
of the left-hand side. Thus, equating terms of equal degree on both sides, we have
that ai 6= 0 only if i = km for some 0 ≤ k ≤ n.
We now use induction to establish that akm = fk (c) for each 0 ≤ k ≤ n, where
fk (c) is a polynomial in c with integer coefficients. For k = 0, we know that a0 =
v = c0 , as required. For k = 1, we equate components of degree m on both sides of
(∗), giving
n n n
vam v + ca0 + a0 c= c
1 1 1
and so, since a0 = v, we have am + nc + nc = nc. Thus am = −nc, which is
certainly a polynomial in c with integer coefficients. Now suppose l > 1 and suppose
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 138
that akm = fk (c), where fk (c) is a polynomial in c with integer coefficients, for all
0 ≤ k < l. We now equate terms of degree lm on both sides of (*), giving
n n n 2 n l−1
alm + c a(l−1)m + a(l−2)m c + a(l−3)m c + · · · + a0 c
1 1 2 l−1
n 2 n n l−2
+ c a(l−2)m + a(l−3)m c + · · · + a0 c
2 2 l−2
n 3 n n l−3 n l
+ c a(l−3)m + a(l−4)m c + · · · + a0 c + ··· + c a0
3 1 l−3 l
n l
= c.
l
By our induction hypothesis, am , . . . , a(l−1)m are all polynomials in c with integer
coefficients and so, rearranging the above equation for alm , it is clear that alm is a
polynomial in c with integer coefficients.
So we can conclude that for every nonzero homogeneous component ai of a, we
have ai c = cai , and so ac = ca. Thus
Let i be maximal with respect to the property ai (v + c)2n 6= 0. (We know such
an i exists, since a0 (v + c)2n = (v + c)2n 6= 0.) Thus the term of maximum degree
of a(v + c)2n is ai c2n , with degree i + 2nm, while the term of maximum degree of
(v + c)n is cn , with degree nm. This contradiction shows that c cannot exist, and so
E must be acyclic.
e ∈ F , and that these vertices only emit edges to their range vertices r(e) or to other
vertices of the form f ∈ F (in the case that ef forms a path in E). Since F is finite,
EF must therefore be row-finite (and finite, as noted earlier). Thus, by Lemma 2.2.9
we have LK (EF ) ∼
Ll
= Mm (K) for some mi , . . . , ml ∈ N. Now, by Lemma 4.1.5,
i=1 i
(v)⇒(ii): Let x ∈ LK (E). Since LK (E) has local units, x ∈ eLK (E)e for some
idempotent e ∈ LK (E). If LK (E) is strongly π-regular, then by Lemma 4.2.2 we
have that eLK (E)e is strongly π-regular. Since eLK (E)e is unital, we can apply [CY,
Lemma 6], and so there exists an element y ∈ eLK (E)e and n ∈ N such that xy = yx
and xn+1 y = xn = yxn+1 . Thus xn = xn+1 y = (xn )xy = (xn+1 y)xy = xn+2 y 2 , since
x and y commute. Repeating this process, we get
Example 4.2.4. We now apply Theorem 4.2.3 to our familiar examples of Leavitt
path algebras.
Proposition 4.3.1. Let R be a ring with local units. If R is right weakly regular,
then every two-sided ideal I of R is right weakly regular and the quotient R/I is
right weakly regular. On the other hand, if R contains a two-sided ideal I such that
both I and R/I are right weakly regular, then R is also right weakly regular.
Proof. Suppose that R is right weakly regular. Let I be a two-sided ideal of R and
let J be a right ideal of I. Clearly J 2 ⊆ J, so it suffices to show that a ∈ J 2 for any
a ∈ J. Now, aR is a right ideal of R, and so aR = (aR)2 = aRaR ⊆ aI, since I is a
two-sided ideal. Furthermore, since R has local units, a = ae for some idempotent
e ∈ R. Thus a = ae ∈ aR = (aR)2 = (aR)4 ⊆ (aI)2 ⊆ J 2 , as required. Now, any
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 141
right ideal of R/I is of the form M/I, where M is a right ideal of R containing I.
Thus (M/I)2 = M 2 /I = M/I, and so R/I is right weakly regular.
Now suppose that R contains a two-sided ideal I such that both I and R/I
are right weakly regular. Let J be a right ideal of R and let a ∈ J. Again, let
e be a local unit for a, so that a = ae ∈ aR. Since R/I is right weakly regular
we have (aR)2 /I = (aR/I)2 = aR/I, and so there must exist b ∈ (aR)2 such that
b + I = a + I, i.e. (a − b) ∈ I. Since (a − b)R ⊆ I and I is right weakly regular, we
have (a − b) ∈ (a − b)R = ((a − b)R)2 . Furthermore, since b ∈ (aR)2 = aRaR we
have b = ag for some g ∈ RaR, and thus (a − b)R = (ae − ag)R = a(e − g)R ⊆ aR.
Thus a = (a − b) + b ∈ ((a − b)R)2 + (aR)2 ⊆ (aR)2 + (aR)2 ⊆ (aR)2 ⊆ J 2 . Thus
J ⊆ J 2 and so R is right weakly regular.
The following proposition gives two useful equivalences to the property that R
is right weakly regular. The equivalence (i) ⇐⇒ (ii) is from [Ram, Proposition 1],
while the equivalence (ii) ⇐⇒ (iii) is from [ARM2, Theorem 3.1].
Proposition 4.3.2. Let R be a ring with local units. The following statements are
equivalent:
(iii) For every two-sided ideal I of R, the left R-module R/I is flat.
Proof. (i)⇒(ii): Suppose that R is right weakly regular. Then, for any a ∈ R, we
have aR = aRaR. Since R has local units, a ∈ aR = aRaR, and so there exists
x ∈ RaR such that a = ax.
(ii)⇒(iii): Let I be a two-sided ideal of R. Since R has local units, R is flat as
a left R-module (by Corollary 1.2.16). Thus, viewing I as a submodule of R, by
Proposition 1.2.17 it suffices to show that if Y is a right ideal of R then I ∩Y R = Y I.
Now Y I ⊆ I and Y I ⊆ Y R, so Y I ⊆ I ∩Y R. Next suppose that y ∈ I ∩Y R. By (ii),
there exists x ∈ RyR such that y = yx. Since y ∈ Y R, we have y = yx ∈ Y RRyR.
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 142
(ii)⇒(i): Assume that for all a ∈ R there exists x ∈ RaR such that a = ax, and
let I be a right ideal of R. Then, for any b ∈ I, there exist ri , si ∈ R such that
Pn Pn 2 2 2
b=b i=1 ri bsi ) = i=1 bri bsi and so b ∈ I . Since I ⊆ I, we have I = I, as
required.
The following proposition from [ARM2, Proposition 3.11] shows that the prop-
erty of being right weakly regular is preserved by subrings eRe and matrix rings.
Proposition 4.3.3. Let R be a ring with local units. The following statements are
equivalent:
(ii) The subring eRe is right weakly regular for all idempotents e ∈ R.
(iii) The matrix ring Mn (R) is right weakly regular for all n ∈ N.
Proof. (i)⇒(ii): Suppose that R is right weakly regular and that e ∈ R is an idempo-
tent. Let eae ∈ eRe, where a ∈ R. By Proposition 4.3.2 there exists x ∈ ReaeR for
which eae = eaex. Let x = ni=1 bi (eae)ci , where each bi , ci ∈ R. Since e is an idem-
P
potent, we have eae = (eae)e = (eae ni=1 bi (eae)ci )e = eae ni=1 (ebi e)(eae)(eci e).
P P
Let y = ni=1 (ebi e)(eae)(eci e). Thus we have found an element y ∈ (eRe)eae(eRe)
P
for which eae = eaey, and so eRe is right weakly regular (by Proposition 4.3.2).
(ii)⇒(i): Let a ∈ R. Since R has local units, a ∈ eRe for some idempotent e ∈ R.
By our assumption, eRe is right weakly regular, and so there exists x ∈ (eRe)a(eRe)
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 143
for which a = ax. However, (eRe)a(eRe) ⊆ RaR, so that x ∈ RaR and thus R is
right weakly regular (again by Proposition 4.3.2).
(i)⇒(iii): This follows from the analogous result for unital rings in [Tu, Propo-
sition 20.4(ii)]. We can generalise it to rings with local units by applying Proposi-
tion 4.3.2.
(iii)⇒(i): For the case n = 1 we have Mn (R) ∼
= R, and so R must be right
weakly regular by our assumption.
Proposition 4.3.3 leads to the following theorem from [ARM2, Theorem 3.12],
which shows that the property ‘right weakly regular’ is Morita invariant.
Theorem 4.3.4. Let R and S be rings with local units that are Morita equivalent.
Then R is right weakly regular if and only if S is right weakly regular.
Proof. Suppose that R is right weakly regular. It suffices to show that eSe is right
weakly regular for every idempotent e ∈ S, since S is then right weakly regular
by Proposition 4.3.3. By Theorem 1.3.7, there exists a surjective Morita context
Pn
(R, S, N, M ). Since e ∈ S = M N , we have e = i=1 xi yi , where each xi ∈ M
Define the map φ : u Mn (R)u → eSe by φ(uAu) = e(xAyt )e. (Note that
xAyt ∈ M RN ⊆ M N = S, since M is a right R-module.) First we must check
that φ is well-defined. Suppose that A, B ∈ Mn (R) with uAu = uBu. Then
φ(uAu) = e(xAyt )e = e2 (xAyt )e2 = xyt xyt xAyt xyt xyt = xuAuyt = xuBuyt =
· · · = e(xByt )e = φ(uBu), as required. Now we show that φ is a ring homo-
morphism. Clearly φ is additive. To check the multiplicative property, consider
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 144
as required.
Now we show that φ is injective. Suppose φ(uAu) = exAyt e = 0 for some
uAu ∈ uMn (R)u. Then uAu = (yt xyt x)A(yt xyt x) = yt (exAyt e)x = 0, and so
ker(φ) = {0}, as required.
Finally, we show that φ is surjective. Consider ese = xyt sxyt ∈ eSe, where
s ∈ S. Note that yt sx is an n × n matrix, and each yi sxj ∈ N SM ⊆ N M = R,
since N is a right S-module. Thus yt sx ∈ Mn (R). Letting yt sx = C, we have
We now start to examine weakly regular rings in the context of Leavitt path
algebras. We begin by showing that, for any Leavitt path algebra, the properties
‘right weakly regular’ and ‘left weakly regular’ are in fact equivalent. The proof here
expands on the proof given in [ARM2, Theorem 3.15], (i) ⇐⇒ (iii).
Lemma 4.3.5. Let E be an arbitrary graph. Then LK (E) is right weakly regular if
and only if it is left weakly regular.
Proof. For any element α = k1 p1 q1∗ + · · · + kn pn qn∗ ∈ LK (E), where each ki ∈ K and
each pi , qi ∈ E ∗ , denote by α∗ the element
α∗ := k1 q1 p∗1 + · · · + kn qn p∗n .
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 145
It is easy to see that for any α, β ∈ LK (E) we have (αβ)∗ = β ∗ α∗ . Let I be a right
ideal of LK (E) and define I ∗ := {α∗ : α ∈ I}. If a, b ∈ I then a∗ − b∗ = (a − b)∗ ∈ I ∗ ,
since a − b ∈ I. Furthermore, if a ∈ I and x ∈ LK (E) then xa∗ = (ax∗ )∗ ∈ I ∗ , since
ax∗ ∈ I. Thus I ∗ is a left ideal of LK (E). Similarly, if I is a left ideal of LK (E)
then I ∗ is a right ideal of LK (E).
Suppose that LK (E) is right weakly regular, and consider a left ideal J of LK (E).
Then J ∗ is a right ideal of LK (E), and so (J ∗ )2 = J ∗ . Take an arbitrary element a ∈
J. Then a∗ = ni=1 x∗i yi∗ , where each xi , yi ∈ J. Thus a = (a∗ )∗ = ni=1 (x∗i yi∗ )∗ =
P P
Pn 2 2 2
i=1 yi xi ∈ J , and so J ⊆ J . Therefore J = J and so LK (E) is left weakly
We now give an example of a Leavitt path algebra that is right weakly regular.
This example is from [ARM2, Example 3.2(ii)].
•Z / •v
Since E satisfies Condition (K), [G2, Theorem 4.2] tells us that every ideal of LK (E)
is graded. Since E is row-finite, for any graded ideal I of LK (E) we have I = I(H),
where H = I ∩ E 0 (by Theorem 3.3.9). Furthermore, H is a hereditary saturated
subset of E 0 (by Lemma 2.2.1), and so the only ideals in LK (E) are those generated
by hereditary saturated subsets of E 0 . Specifically, we have precisely three ideals:
0, LK (E) and I = I({v}).
Clearly LK (E)/LK (E) is flat as a left LK (E)-module. Furthermore, LK (E)/0 =
LK (E) is flat by Corollary 1.2.16. Finally, note that Pl (E) = {v}, and so by
Theorem 3.2.11 we have soc(LK (E)) = I. Now, [ARM2, Corollary 2.24] states that
if R is a semiprime ring with local units then R/ soc(R) is flat as a left R-module.
Since LK (E) is semiprime (by Proposition 3.2.1), LK (E)/I is flat. Thus we can
apply Proposition 4.3.2 (iii)⇒(i) to obtain that LK (E) is right weakly regular.
Not every Leavitt path algebra is right weakly regular, as the following examples
(from [ARM2, Example 3.3]) illustrate.
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 146
u
E: 6•
u / •v
F : 6•
We now begin to work our way towards Proposition 4.3.10, which shows that
any graded ideal of a Leavitt path algebra is itself isomorphic to a Leavitt path
algebra. This result, while being interesting in its own right, will also be useful
when determining which Leavitt path algebras are right weakly regular. To begin,
we need the following definition.
In other words, FE (H, S) is the set F̃E (H, S) with all paths of length one going
directly from S to H removed.
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 147
We can use the set FE (H, S) to construct a new graph H ES . First, we create
a copy of FE (H, S) and denote this by F̄E (H, S) = {ᾱ : α ∈ FE (H, S)}. Then we
define the graph H ES = (H ES0 , H ES1 , s0 , r0 ) as follows:
0
H ES := H ∪ S ∪ FE (H, S).
1
H ES := {e ∈ E 1 : s(e) ∈ H} ∪ {e ∈ E 1 : s(e) ∈ S and r(e) ∈ H} ∪ F̄E (H, S).
We now note some properties of the graph H ES . First, note that H ES contains
the restriction graph
0
Note also that every vertex in S ⊆ H ES is an infinite emitter, emitting an infinite
number of edges into H and no other edges. On the other hand, each vertex α ∈
FE (H, S) ⊆ H ES0 is by definition a source that emits exactly one edge ᾱ with range
in H ∪ S. Moreover, since H is hereditary, if a cycle c in H ES contains a vertex in
H then all vertices of c must be in H. Thus any cycle in the graph H ES must come
from the restriction graph EH . These properties will prove useful in the proof of
Proposition 4.3.10. However, we first give an example to illustrate the construction
of H ES .
where the (∞) symbol indicates that there are infinitely many edges from u1 to v
and from u2 to v. Let H = {v} (which is clearly a hereditary and saturated subset
of E 0 ), giving BH = {u1 , u2 }. Furthermore, let S = BH . Then FE (H, S) = {e1 , e2 },
and so H ES is the graph
•e1 •e2
ē1 ē2
•u1GG •u2
GG ww
GG
G www
(∞) GG# ww
{ww (∞)
•v
Recall from Theorem 3.3.9 that any graded ideal I of LK (E) is generated by
the hereditary saturated subset H = I ∩ E 0 and the set {v H : v ∈ S}, where
S = {w ∈ BH : wH ∈ I}. We denote this by I = I(H,S) .
The following proposition is from [ARM2, Proposition 3.7], which is the algebraic
analogue of [DHS, Lemma 1.6] and a generalisation of [AP, Lemma 1.2] to arbitrary
graphs. However, when examining this proposition the author discovered an error
that leaves the proof incomplete. Furthermore, the proof of [DHS, Lemma 1.6] was
discovered to contain a similar error. At the time of writing, these errors are yet
to be resolved. We will mention these problems when they arise in the proof and
show that they can be avoided in the row-finite case (so that [AP, Lemma 1.2] is
still valid).
Proposition 4.3.10. Let E be an arbitrary graph. For any graded ideal I = I(H,S) of
the Leavitt path algebra LK (E), there exists an isomorphism φ : LK (H ES ) → I(H,S) .
if v ∈ S then v is an infinite emitter and so the (CK2) relation does not apply.
Case 1: v ∈ H. Note that every edge emitted by v in E 1 is contained in the
restriction graph EH and is therefore in H ES1 . Thus s0 (e)=v ee∗ = s(e)=v ee∗ , and
P P
Case 2: v = α ∈ FE (H, S) with r(α) ∈ H. Then α only emits the edge ᾱ, and
so φ(α − ᾱᾱ∗ ) = αα∗ − αα∗ = 0.
Case 3: v = α ∈ FE (H, S) with r(α) ∈ S. Again, α only emits the edge ᾱ, and
so φ(α− ᾱᾱ∗ ) = αr(α)H α∗ −(αr(α)H )(r(α)H )α∗ ) = 0, since r(α)H is an idempotent.
Thus the (CK2) relation is preserved by φ.
On the other hand, the proof of [DHS, Lemma 1.6] appears to get around this
problem by writing f1 . . . fn as a concatenation of subpaths α1 . . . αk , where the
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 151
final edge (and only the final edge) of each α1 , . . . , αk−1 has range in S. Since
s(fi ) ∈
/ H for any i = 1, . . . , n (by the minimality of n), each αi ∈ FE (H, S) for
i = 1, . . . , k − 1. Furthermore, either αk ∈ FE (H, S) or αk is a single edge from S to
H, in which case φ(αk ) = αk . The proof asserts that we therefore have either α =
φ(α¯1 ) . . . φ(α¯k )φ(fn+1 ) . . . φ(fm ) (in the former case) or α = φ(α¯1 ) . . . φ(αk−1
¯ )φ(αk )
φ(fn+1 ) . . . φ(fm ) (in the latter case). Aside from the fact that φ(ᾱi ) = αi r(αi )H
rather than simply αi , the most significant problem is that α¯1 . . . α¯k is not a nonzero
element in LK (H ES ), since it is impossible for two edges β¯1 , β¯2 ∈ F̄E (H, S) ⊆ H E 1 S
to be adjacent. (Recall that for any edge β ∈ F̄E (H, S), we define s(β̄) = β, which
is a source in our graph H ES by definition.) We refer again to Example 4.3.9, in
which e1 , e2 are adjacent edges in our graph E, while e¯1 , e¯2 are not:
e2
*
•u5 1 j •u 2 •e1 •e2
555
55 e1
ē1 ē2
55
55
E: (∞) 5
(∞) H ES : •uG1G •u2
55
GG
GG ww
w
55 ww
5
G
(∞) GG# ww
{ww (∞)
•v •v
However, in the case that E is row-finite the proof simplifies greatly and it is
possible to show that φ is an epimorphism, as we now show. Note that if E is
row-finite there are no breaking vertices and so S = ∅. Thus the set FE (H, S) is
simply the set of all positive paths α = e1 . . . en for which each ei ∈ E 1 , r(α) ∈ H
and s(ei ) ∈
/ H for each i = 1, . . . n. Furthermore, I(H,S) = I(H), which is generated
by elements of the form αβ ∗ , with r(α) = r(β) ∈ H. As above, to show that φ is
an epimorphism it suffices to find an inverse image for α = f1 . . . fm with r(α) ∈ H.
If s(α) ∈ H, then α = φ(α), as was shown in the more general case. Suppose
s(α) ∈
/ H and let n be the smallest integer such that 1 < n ≤ m and r(fn ) ∈ H. If
n < m, then α1 = f1 . . . fn ∈ FE (H, S), while s(fi ) ∈ H for each i = n + 1, . . . , m.
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 152
While we have only proved that Proposition 4.3.10 holds in the case that E
is row-finite, we will proceed as in [ARM2] and assume that the following results,
some of which rely on Proposition 4.3.10, hold for an arbitrary graph E (unless
stated otherwise). As a side note, [ARM2, Proposition 3.7] states that φ is a graded
isomorphism, which is not necessarily true. To see this, recall that φ(ᾱ) = α for
all ᾱ ∈ F̄E (H, S) with r(α) ∈ H. Now, ᾱ is an element of degree 1 in LK (H ES ),
1
since ᾱ ∈ H ES , whereas α is an element of degree l(α) in LK (E), and l(α) is not
necessarily 1. However, this observation does not affect any subsequent results.
We now proceed to work our way toward the main theorem of this section, The-
orem 4.3.15. To begin, we give the following useful theorem, which is a combination
of results from Tomforde [To] and Goodearl [G2]. Recall that a ring R is said to be
an exchange ring if, given any element x ∈ R, there exists an idempotent e ∈ xR
such that e = x + s − xs for some x ∈ R. Note that if R is unital then we have
1 − e = 1 − (x + s − xs) = (1 − x)(1 − s) ∈ (1 − x)R, and so this definition is
consistent with the more familiar unital definition.
Theorem 4.3.11. Let E be an arbitrary graph. The following statements are equiv-
alent:
Proof. (i) ⇐⇒ (iii) is from [To, Theorem 6.16], while (ii) ⇐⇒ (iii) is from [G2,
Theorem 4.2].
The following proposition from [ARM2, Proposition 3.9] shows that the converse
of Proposition 4.3.12 is true in the row-finite case.
Proposition 4.3.13. Let E be a row-finite graph. If the Leavitt path algebra LK (E)
is right weakly regular, then the graph E satisfies Condition (K).
Proof. We begin by showing that if LK (E) is right weakly regular then every cycle in
E has an exit. Suppose, by way of contradiction, that there exists a cycle c without
exits in E, and let H be the hereditary saturated closure of the vertices of c. By
[AAPS, Proposition 3.6(iii)] we have I(H) ∼
= Mn (K[x, x−1 ]) for some n ∈ N ∪ {∞}.
Now, since LK (E) is right weakly regular, so too is I(H) (by Proposition 4.3.1),
and thus Mn (K[x, x−1 ]) is right weakly regular. Consider E11 ∈ Mn (K[x, x−1 ]),
the matrix unit with 1 in the (1, 1) position and zeros elsewhere. Since E11 is an
idempotent, we have that E11 Mn (K[x, x−1 ])E11 is right weakly regular by Propo-
sition 4.3.3. Note that E11 Mn (K[x, x−1 ])E11 consists of those matrices for which
the only nonzero entry is in the (1, 1) position, and so is isomorphic to K[x, x−1 ].
However, we know that K[x, x−1 ] is not right weakly regular (see Example 4.3.7), a
contradiction, and so E contains no cycles without exits.
Now we show that if LK (E) is right weakly regular then E must satisfy Condition
(K). We proceed in a similar manner to the proof of Lemma 2.3.4: suppose, by way
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 154
of contradiction, that there exists a v ∈ E 0 such that CSP (v) = {p}. If p is not a
cycle, it is easy to see that there exists a cycle based at v whose edges are a subset
of the edges of p, contradicting the fact that CSP (v) = {p}. Thus p is a cycle and
so, by the above paragraph, there must exist exits e1 , . . . , em for p.
Let A be the set of all vertices in p. Now r(ei ) ∈
/ A for any i = 1, . . . , m, for
otherwise we would have another closed simple path based at v distinct from p. Let
X = {r(ei ) : i = 1, . . . , m} and let H be the hereditary saturated closure of X.
Recall the definition of Gn (X) from Lemma 1.4.9. Suppose that A ∩ H 6= ∅, and let
n be the minimum natural number for which A ∩ Gn (X) 6= ∅.
Let w ∈ A ∩ Gn (X) and suppose that n > 0. By the minimality of n, we have
w∈
/ Gn−1 (X). Thus, by the definition of Gn (X), w must be a regular vertex and
r(s−1 (w)) ⊆ Gn−1 (X), so that w only emits edges into Gn−1 (X). Since w is a
vertex in p, there must exist an edge f such that s(f ) = w and r(f ) ∈ A. Thus
r(f ) ∈ A ∩ Gn−1 (X), contradicting the minimality of n. Therefore we must have
n = 0, and so w ∈ G0 (X) = T (X) (by definition). Thus, for some i = 1, . . . , m,
there is a path q from r(ei ) to w. Since w is in the cycle p, and ei is an exit for p,
there must also be a path p0 from w to r(ei ), and so p0 q is a closed path based at w.
However, this implies that |CSP (v)| ≥ 2, a contradiction.
Thus H ∩ A = ∅, and in particular H 6= E 0 . Since E is row-finite, BH = ∅ and
so we have
Using the fact that right weakly regular is a Morita invariant property, we can
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 155
Proof. Suppose that E satisfies Condition (K). Then by Proposition 4.3.12, LK (E)
is right weakly regular.
Conversely, suppose that LK (E) is right weakly regular. Since E is countable, we
can apply the desingularisation process (see Definition 2.4.1) to obtain a row-finite
desingularisation F of E. By Theorem 2.4.5, LK (E) and LK (F ) are Morita equiva-
lent, and so, by Theorem 4.3.4, we have that LK (F ) is right weakly regular. Since
F is row-finite, this implies that F satisfies Condition (K) (by Proposition 4.3.13).
Thus LK (F ) is an exchange ring by Theorem 4.3.11, and since the exchange prop-
erty is a Morita invariant for rings with local units (see [AGS, Theorem 2.1]), LK (E)
is also an exchange ring. Finally, this implies that E satisfies Condition (K), by
Theorem 4.3.11.
We now come to the main theorem of this section (from [ARM2, Theorem 3.15]),
which summarises the results we have seen thus far.
Theorem 4.3.15. Let E be an arbitrary graph. The following statements are equiv-
alent:
(i) The Leavitt path algebra LK (E) is a right weakly regular ring.
(iii) The Leavitt path algebra LK (E) is a left weakly regular ring.
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 156
Proof. The equivalences (i) ⇐⇒ (iv) ⇐⇒ (v) are from Theorem 4.3.11, while
(i) ⇐⇒ (iii) comes from Lemma 4.3.5.
(i)⇒(ii): This generalisation of Proposition 4.3.14 comes from [ARM2, Theorem
3.15], as mentioned above.
(ii)⇒(vi): If E satisfies Condition (K) then by [G2, Theorem 3.8] every ideal
of LK (E) is graded. Thus every ideal of LK (E) is isomorphic to a Leavitt path
algebra, by Proposition 4.3.10.
(vi)⇒(vii): This is immediate, since every Leavitt path algebra has local units.
(vii)⇒(i): Suppose that every ideal of LK (E) has local units and consider an
arbitrary element a ∈ LK (E). Since LK (E) has local units, a = eae for some
idempotent e ∈ LK (E), and so a ∈ LK (E)aLK (E). Since LK (E)aLK (E) is a two-
sided ideal, it has local units, and so there exists u ∈ LK (E)aLK (E) for which
a = au. Thus, by Proposition 4.3.2, LK (E) is right weakly regular, as required.
Example 4.3.16. We now apply Theorem 4.3.15 to our familiar examples of Leavitt
path algebras to determine if they are weakly regular.
(i) The finite line graph Mn . Since Mn is acyclic, it satisfies Condition (K), and
so LK (Mn ) ∼
= Mn (K) is both left and right weakly regular for all n ∈ N.
(ii) The rose with n leaves Rn . For n = 1, the vertex v in R1 is the base of
exactly one closed simple path, and so R1 does not satisfy Condition (K). Thus
LK (R1 ) ∼
= K[x, x−1 ] is not left or right weakly regular, confirming what we saw in
Example 4.3.7. However, for n > 1 the graph Rn does satisfy Condition (K), and
so LK (Rn ) ∼
= L(1, n) is both left and right weakly regular.
Proposition 4.4.1. Let E be an arbitrary graph. If LK (E) is left (or right) self-
injective then LK (E) is von Neumann regular and the graph E is acyclic.
Proof. Let e ∈ LK (E) be an idempotent, and recall that LK (E)e is a direct sum-
mand of LK (E) (by Lemma 1.2.3 (i)). Since LK (E) is injective as a left LK (E)-
module, so too is LK (E)e (by Lemma 1.2.12) and thus, by [L2, Theorem 13.1],
we have that EndL (E) (LK (E)e) is left self-injective. Since EndL (E) (LK (E)e) ∼
K K =
Op Op
(eLK (E)e) (by Lemma 1.2.2), (eLK (E)e) is therefore left self-injective. Thus,
by [L2, Corollary 13.2(2)] we have that (eLK (E)e)Op /J((eLK (E)e)Op ) is von Neu-
mann regular. Note that if a ring ROp is von Neumann regular then, for any a ∈ R,
there exists an x ∈ R such that a = a · x · a = axa, and so R is also von Neumann
regular. In particular, we have that eLK (E)e/J(eLK (E)e) is von Neumann regular.
Now, by [J2, Proposition 3.7.1], we have J(eLK (E)e) = eJ(LK (E))e. However,
J(LK (E)) = {0} (by Corollary 3.3.11) and so J(eLK (E)e) = {0}. Thus we have
eLK (E)e/J(eLK (E)e) = eLK (E)e and so eLK (E)e is von Neumann regular for any
idempotent e ∈ R.
Let x ∈ LK (E). Since LK (E) has local units, there exists an idempotent f ∈ R
such that x ∈ f LK (E)f . Since f LK (E)f is von Neumann regular, there exists
y ∈ f LK (E)f such that x = yxy, and so LK (E) is von Neumann regular. Finally,
by Theorem 4.2.3, E must be acyclic.
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 158
In Proposition 4.4.4 we give the somewhat surprising result that if a Leavitt path
algebra LK (E) is left (or right) self-injective then the corresponding graph E must
be row-finite. This is the first time in this thesis we have seen a property of LK (E)
imply row-finiteness on E. To set up this proposition, we first give two preliminary
results. The first of these results requires the following definition.
Suppose that V is a left vector space over a division ring D. The dual vec-
tor space of V , denoted V ∗ , is the set of homomorphisms from V to D; that
is, HomD (V, D). Furthermore, V ∗ is a right vector space over D. The following
theorem, known as the ‘Erdös-Kaplansky Theorem’, gives a formulation for the di-
mension of V ∗ . This theorem is given as Theorem 2 on p. 237 of Jacobson’s [J1] and
Exercise 7.3(d) of Bourbaki’s [Bo].
Theorem 4.4.2 (The Erdös-Kaplansky Theorem). Let V be a left vector space with
infinite basis {bi : i ∈ I} over a division ring D. Then the dimension of V ∗ as a
right vector space over D is given by
Lemma 4.4.3. Let E be an arbitrary graph and let X be a set of independent paths
in E. Then the set of left ideals {LK (E)pp∗ : p ∈ X} is LK (E)-independent – that
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 159
is, LK (E)pp∗ ∩ LK (E)qq ∗ = {0} for all p ∈ X or, equivalently, that the
P
q∈X,q6=p
With these two preliminary results established we can now prove the following
result from [ARM2, Proposition 4.4].
Proposition 4.4.4. If a Leavitt path algebra LK (E) is left (or right) self-injective,
then the graph E must be row-finite.
For the first part of this proof, we wish to find a subset X of Y with cardinality
σ such that the set of left ideals {LK (E)pp∗ : p ∈ X} is LK (E)-independent. First
note that, for each n ∈ N, the set {LK (E)pp∗ : p ∈ Yn } is LK (E)-independent.
To see this, note that all paths in Yn are of length n, so that no path in Yn is an
initial subpath of any other path in Yn . Thus the result follows from Lemma 4.4.3.
Therefore, if αn = |Yn | = σ for some n ∈ N, we can choose X = Yn .
If not, then we must have αn < σ for all n ∈ N. Note that it is not always
the case that αn+1 > αn , since not every path in Yn is necessarily a subpath of a
path in Y_{n+1}. Thus we define a strictly increasing subsequence {α_{i_n} : n < ω} as follows: let α_{i_1} = α_1, and define i_2 to be the smallest integer for which α_{i_2} > α_{i_1}.
In general, if α_{i_n} has been chosen for some n, we define i_{n+1} to be the smallest integer for which α_{i_{n+1}} > α_{i_n}. Note that, since α_{i_1} = α_1 is infinite, this is a strictly increasing sequence of infinite cardinals.
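For instance (an illustrative sequence, not taken from the text), if the cardinals were

$$(\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \dots) = (\aleph_0, \aleph_0, \aleph_1, \aleph_0, \aleph_2, \dots),$$

then $\alpha_{i_1} = \aleph_0$, $i_2 = 3$ with $\alpha_{i_2} = \aleph_1$, and $i_3 = 5$ with $\alpha_{i_3} = \aleph_2$; indices at which no new, larger value occurs are simply skipped.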
T_2 = {p_2 e_β^{(2)} : β < α_{i_2}} ∪ (T_1 \ {q : q is an initial subpath of p_2}).
Note that the removal of the set {q : q is an initial subpath of p2 } ensures that T2
is also a set of independent paths.
Now let k ∈ N and suppose that T_j has been defined (and is a set of independent paths of length at most i_j) for all j ≤ k. As above, there must exist a path p_{k+1} ∈ Y_{i_{k+1}−1} such that r(p_{k+1}) emits α_{i_{k+1}} edges. Again, let r(p_{k+1}) = v_{k+1} and let s^{−1}(v_{k+1}) = {e_β^{(k+1)} : β < α_{i_{k+1}}}. Now we define

T_{k+1} = {p_{k+1} e_β^{(k+1)} : β < α_{i_{k+1}}} ∪ (T_k \ {q : q is an initial subpath of p_{k+1}}).
Again, the removal of the set {q : q is an initial subpath of pk+1 } ensures that Tk+1
is a set of independent paths. Thus Tn is defined (and is a set of independent paths)
for all n ∈ N. Furthermore, for any n ∈ N, {LK (E)pp∗ : p ∈ Tn } is an LK (E)-
independent set of left ideals, by Lemma 4.4.3. Note also that each Tn is a set of
paths of length i_n or less.
However, it may not necessarily be the case that T = ∪_{n<ω} T_n is a set of independent paths, since for example a path in T_2 may still be an initial subpath of p_4. Thus, for each n ∈ N, we define
Now, define

S = Σ_{p∈X} L_K(E)pp^* = ⊕_{p∈X} L_K(E)pp^* ⊆ L_K(E)v.
We know that LK (E)v is a direct summand of LK (E) (by Lemma 2.1.9), and so since
LK (E) is injective as a left LK (E)-module, so too is LK (E)v (by Lemma 1.2.12).
Consider the inclusion map φ : S → LK (E)v and let f ∈ HomLK (E) (S, LK (E)v).
Since LK (E)v is injective, there exists h ∈ HomLK (E) (LK (E)v, LK (E)v) such that
the following diagram commutes:
              φ
    0 ──→ S ──────→ L_K(E)v
            \         /
           f \       / h
              ↘     ↙
             L_K(E)v
Thus, if we define φ^* : Hom_{L_K(E)}(L_K(E)v, L_K(E)v) → Hom_{L_K(E)}(S, L_K(E)v) by φ^*(g) = gφ for all g ∈ Hom_{L_K(E)}(L_K(E)v, L_K(E)v), then φ^* is an epimorphism, since every f ∈ Hom_{L_K(E)}(S, L_K(E)v) arises as f = hφ = φ^*(h) for some h as above.
Then we have

Hom_{L_K(E)}(S, L_K(E)v) = Hom_{L_K(E)}(⊕_{p∈X} L_K(E)pp^*, L_K(E)v) ≅ ∏_{p∈X} Hom_{L_K(E)}(L_K(E)pp^*, L_K(E)v),

the final isomorphism coming from Proposition 1.2.4. Now, for each k ∈ K and a fixed i ∈ I, we can define λ_k^{(i)} ∈ Hom_{L_K(E)}(L_K(E)pp^*, ⊕_{p∈X} L_K(E)pp^*) by λ_k^{(i)}(x) = (w_j)_{j∈I}, where w_i = kx and w_j = 0 for j ≠ i. Thus, setting F^{(i)} = {λ_k^{(i)} : k ∈ K}, we have F^{(i)} ≅ K. Therefore
∏_{p∈X} Hom_{L_K(E)}(L_K(E)pp^*, ⊕_{p∈X} L_K(E)pp^*) ⊇ ∏_{p∈X} F_p^{(i)} ≅ ∏_{p∈X} K_p,

where each F_p^{(i)} = F^{(i)} and K_p = K. Now, by the Erdös-Kaplansky Theorem, ∏_{p∈X} K_p has K-dimension card(K)^{card(X)} = card(K)^σ and so, by the above inequalities, Hom_{L_K(E)}(S, L_K(E)v) has K-dimension ≥ card(K)^σ. However, by Lemma 1.2.2, Hom_{L_K(E)}(L_K(E)v, L_K(E)v) ≅ vL_K(E)v, which has K-dimension ≤ σ < card(K)^σ, as observed earlier. This contradicts the fact that φ^* is an epimorphism, and so E must be row-finite.
Proposition 4.4.5. Let LK (E) be a left (resp. right) self-injective Leavitt path alge-
bra, and let a be an arbitrary element of LK (E). Then the left ideal LK (E)a (resp.
right ideal aLK (E)) cannot contain an infinite set of LK (E)-independent left (resp.
right) ideals of LK (E).
Proof. If LK (E) is left self-injective, then by Proposition 4.4.4 the graph E must be
row-finite. Let a ∈ L_K(E). Write a = Σ_{j=1}^{n} k_j p_j q_j^*, where p_j, q_j ∈ E^* and k_j ∈ K,
and let V = {s(p_j), s(q_j) : j = 1, . . . , n}. By Lemma 2.1.12, e = Σ_{v∈V} v is a local unit for a, and in particular we have L_K(E)a ⊆ L_K(E)e. We show that L_K(E)e has
finite uniform dimension.
By way of contradiction, suppose that LK (E)e contains an infinite family of
independent submodules {Ai : i ∈ I}, where I is an infinite index set, and let
S = ⊕_{i∈I} A_i. Note that every element of eL_K(E)e is of the form Σ_{j=1}^{m} l_j a_j b_j^*, where s(a_j), s(b_j) ∈ V for each j = 1, . . . , m. Thus eL_K(E)e = ⊕_{v∈V} vL_K(E)v. For any
v ∈ V , the cardinality of the set of paths of a fixed length n beginning with v must
be finite (since E is row-finite), so the cardinality of the set of all paths of finite
length beginning with v is at most countably infinite. Since vLK (E)v is generated
by finite paths beginning with v, the K-dimension of vLK (E)v is at most countable,
and thus the K-dimension of eLK (E)e is at most countable.
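To make the counting explicit (a routine induction sketched here for convenience; the notation P_n(v) is ours, not the thesis's), write P_n(v) for the set of paths of length n with source v. Row-finiteness of E gives

$$|P_0(v)| = 1, \qquad |P_{n+1}(v)| = \sum_{p \in P_n(v)} |s^{-1}(r(p))| < \infty,$$

so the set of all finite paths starting at v is a countable union of finite sets, hence at most countable; consequently the spanning set {pq^* : s(p) = s(q) = v} of vL_K(E)v is at most countable as well.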
We now proceed as in the proof of Proposition 4.4.4. Using a similar argument,
we can show Hom_{L_K(E)}(S, L_K(E)e) ⊇ ∏_{i∈I} F_i, where each F_i ≅ K. Furthermore,
since LK (E) is left self-injective, the direct summand LK (E)e is an injective left
LK (E)-module, and so again we have an epimorphism
Proposition 4.4.6. For any graph E, if the Leavitt path algebra LK (E) is left (or
right) self-injective, then every infinite path in E contains a line point.
Proof. Suppose that γ is an infinite path in E that contains no line points. Now,
since LK (E) is left self-injective, E must be acyclic, by Proposition 4.4.1. Thus γ
must contain an infinite number of bifurcation vertices {vi : i = 1, 2, 3, . . .}, and so
we can write γ as a concatenation of a series of countably many paths γ1 γ2 γ3 . . .,
where r(γi ) = vi for each i = 1, 2, 3, . . .. Furthermore, let s(γ) = v.
0 ≠ γ_1γ_2 · · · γ_n f_n = p_nγ_1γ_2 · · · γ_n f_n = x p_{n+1}γ_1γ_2 · · · γ_n f_n = x γ_1γ_2 · · · γ_{n+1}γ_{n+1}^* f = 0,
(pj − pj+1 )(pi − pi+1 ) = pj pi − pj pi+1 − pj+1 pi + pj+1 pi+1 = pj − pj − pj+1 + pj+1 = 0.
We now come to the main result of this section, which is from [ARM2, Theorem
4.7].
Theorem 4.4.7. Let E be an arbitrary graph and let K be any field. The following
statements are equivalent:
(iii) The graph E is row-finite, acyclic and every infinite path contains a line point.
Proof. (i)⇒(iii): This follows directly from Propositions 4.4.1, 4.4.4 and 4.4.6.
Example 4.4.8. We now apply Theorem 4.4.7 to our familiar examples of Leavitt
path algebras to determine if they are self-injective.
(i) The finite line graph Mn . Since Mn is row-finite, acyclic and contains no
infinite paths, L_K(M_n) ≅ M_n(K) is both left and right self-injective (and also
semisimple) for all n ∈ N.
(ii) The rose with n leaves Rn . For each n ∈ N, Rn contains n cycles and so
L_K(R_n) ≅ L(1, n) is neither left nor right self-injective.
(iii) The infinite clock graph C∞ . Since C∞ is not row-finite, we have that
L_K(C_∞) ≅ ⊕_{i=1}^{∞} M_2(K) ⊕ KI_{22} is neither left nor right self-injective.
Appendix A
Direct Limits
Definition A.1.1. Let (Ri , ϕij )I be a direct system of rings and let R be a ring for
which there exists a ring homomorphism ϕi : Ri → R for each i ∈ I. We say that
(R, ϕi ), or simply R, is a direct limit of the system if the following two conditions
are satisfied:
(i) For each pair i, j ∈ I with i ≤ j, we have ϕi = ϕj ϕij ; that is, the following
diagram commutes:
             ϕ_ij
    R_i ──────────→ R_j
        \          /
     ϕ_i \        / ϕ_j
          ↘      ↙
              R
(ii) For any ring S and any ring homomorphisms µ_i : R_i → S (i ∈ I) satisfying µ_i = µ_j ϕ_{ij} whenever i ≤ j, there exists a unique ring homomorphism µ : R → S such that µ_i = µϕ_i for each i ∈ I; that is, the following diagram commutes:

             ϕ_i
    R_i ──────────→ R
        \          /
     µ_i \        / µ
          ↘      ↙
              S
Now suppose that (R̄, ϕ̄_i) is another pair, consisting of a ring and ring homomorphisms, satisfying conditions (i) and (ii). Then there exists a unique homomorphism µ : R → R̄ such that ϕ̄_i = µϕ_i for all i ∈ I. Similarly, there exists a unique homomorphism µ′ : R̄ → R such that ϕ_i = µ′ϕ̄_i for all i ∈ I. Thus we have ϕ_i = µ′µϕ_i, giving (by the uniqueness) µ′µ = 1_R, and ϕ̄_i = µµ′ϕ̄_i, giving µµ′ = 1_{R̄}. Thus µ is an isomorphism and so R ≅ R̄. A direct limit is therefore unique up to isomorphism, and so we can unambiguously denote this limit by lim→(R_i, ϕ_{ij}).
Note that if I is an upward-directed index set and {R_i : i ∈ I} is an ascending chain of rings – that is, R_i ⊆ R_j whenever i ≤ j – then defining ϕ_{ij} to be the inclusion map from R_i to R_j (for each pair i, j ∈ I with i ≤ j), we have that (R_i, ϕ_{ij})_I is a direct system. In this case we usually drop the ϕ_{ij} from the notation and write the direct limit of the family as simply lim→_{i∈I} R_i. It is straightforward to show that lim→_{i∈I} R_i = ∪_{i∈I} R_i, the directed union of the family.
We illustrate the concept of a direct limit with the following useful example.
Let R be a ring with local units, so that there exists a set of idempotents I ⊆ R
for which, given any finite subset {x_1, . . . , x_n} ⊆ R, there exists e ∈ I such that
xi ∈ eRe for each i = 1, . . . , n. We define a partial ordering ≤ on I by writing
e ≤ f if e ∈ f Rf . (Note that e ≤ f is equivalent to eRe ⊆ f Rf .) Furthermore, I
is an upward-directed set: given any pair e, f ∈ I, there must exist g ∈ I such that
e, f ∈ gRg (by the definition of local units), so that e ≤ g and f ≤ g.
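For completeness, the parenthetical equivalence above can be checked directly (a short verification, not spelled out in the text): if e ∈ fRf, say e = faf, then fe = f²af = faf = e and likewise ef = e, so for every x ∈ R

$$exe \;=\; (fe)\,x\,(ef) \;=\; f(exe)f \;\in\; fRf,$$

giving eRe ⊆ fRf; conversely, if eRe ⊆ fRf then e = e³ ∈ eRe ⊆ fRf.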
Lemma A.1.2. Let R be a ring with local units. Let I be the set of local units and
let ≤ be the partial ordering defined above. For each pair e, f ∈ I with e ≤ f , define
ϕef : eRe → f Rf and ϕe : eRe → R to be the inclusion ring homomorphisms. Then
R = lim→(eRe, ϕ_{ef}).
Now suppose there exists a ring S and ring homomorphisms µe : eRe → S such
that µe = µf ϕef for all e, f ∈ I with e ≤ f . For any x ∈ R, choose e ∈ I such that
x ∈ eRe (such an element exists since I is a set of local units), and let µ(x) = µe (x),
thus defining a map µ : R → S. Note that our choice of e is not unique, so we must
check that this map is well-defined. Suppose there exists f ∈ I with f ≠ e such
that x ∈ f Rf . Since I is an upward-directed set, there exists g ∈ I such that e ≤ g
and f ≤ g, and so µ_e(x) = µ_g(x) = µ_f(x), since ϕ_{eg} and ϕ_{fg} are inclusion maps and µ_e = µ_g ϕ_{eg}, µ_f = µ_g ϕ_{fg}. Thus µ is well-defined, and by construction the following diagram commutes:
             ϕ_e
    eRe ──────────→ R
         \         /
      µ_e \       / µ
           ↘     ↙
              S
and so ν = µ. Thus we have satisfied condition (ii) of the direct limit definition and so R = lim→(eRe, ϕ_{ef}), up to isomorphism.
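As a concrete instance of Lemma A.1.2 (a standard example, included here only as an illustration; the ring M_∞(K) and the idempotents e_n below are not taken from the thesis), let M_∞(K) be the non-unital ring of ℕ × ℕ matrices over a field K with only finitely many nonzero entries, and let e_n be the diagonal idempotent with 1 in the first n diagonal entries and 0 elsewhere. Any finite set of such matrices lies in some corner e_n M_∞(K) e_n ≅ M_n(K), so {e_n : n ∈ ℕ} is a set of local units and e_n ≤ e_m precisely when n ≤ m. Lemma A.1.2 then gives

$$M_\infty(K) \;=\; \bigcup_{n \in \mathbb{N}} e_n M_\infty(K) e_n \;\cong\; \varinjlim M_n(K),$$

with connecting maps M_n(K) → M_{n+1}(K) given by the corner embeddings A ↦ diag(A, 0).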
Bibliography
[AA1] Abrams, G., and Aranda Pino, G., The Leavitt path algebra of a graph,
J. Algebra 293(2) (2005), 319–334.
[AA2] Abrams, G., and Aranda Pino, G., Purely infinite simple Leavitt path
algebras, J. Pure Appl. Algebra 207(3) (2006), 553–563.
[AA3] Abrams, G., and Aranda Pino, G., The Leavitt path algebras of arbitrary
graphs, Houston J. Math. 34(2) (2008), 423–442.
[AAPS] Abrams, G., Aranda Pino, G., Perera, F. and Siles Molina, M., Chain
conditions for Leavitt path algebras, Forum Math. 22(1) (2010), 95–114.
[AAS] Abrams, G., Aranda Pino, G., and Siles Molina, M., Finite-dimensional
Leavitt path algebras, J. Pure Appl. Algebra 209(3) (2007), 753–762.
[AR] Abrams, G., and Rangaswamy, K. M., Regularity conditions for arbitrary
Leavitt path algebras, Algebr. Represent. Theory 13 (2010), 319–334.
[ARM1] Abrams, G., Rangaswamy, K. M., and Siles Molina, M., The socle series
of a Leavitt path algebra, Israel J. Math. (to appear).
[AAMMS] Alberca Bjerregaard, P., Aranda Pino, G., Martı́n Barquero, D., Martı́n
González, C., and Siles Molina, M., Atlas of Leavitt path algebras of small
graphs (preprint).
[AM] Ánh, P.N., and Márki, L., Morita equivalence for rings without identity,
Tsukuba J. Math. 11(1) (1987), 1–16.
[AGS] Ara, P., Gómez Lozano, M., and Siles Molina, M., Local rings of exchange
rings, Comm. Algebra 26(12) (1998), 4191–4205.
[AGP] Ara, P., Goodearl, K. R., and Pardo, E., K0 of purely infinite simple
regular rings, K-Theory 26 (2002), 69–100.
[AMP] Ara, P., Moreno, M. A., and Pardo, E., Nonstable K-theory for graph
algebras, Algebr. Represent. Theory 10(2) (2007), 157–178.
[AP] Ara, P., and Pardo, E., Stable rank for graph algebras, Proc. Amer. Math.
Soc. 136(7) (2008), 2375–2386.
[A] Aranda Pino, G., On maximal left quotient systems and Leavitt path alge-
bras, Doctoral Thesis, Department of Algebra, Geometry and Topology,
University of Malaga (2005).
[AMMS1] Aranda Pino, G., Martı́n Barquero, D., Martı́n González, C., and Siles
Molina, M., The socle of a Leavitt path algebra, J. Pure Appl. Algebra
212 (2008), 500–509.
[AMMS2] Aranda Pino, G., Martı́n Barquero, D., Martı́n González, C., and Siles
Molina, M., Socle theory for Leavitt path algebras of arbitrary graphs,
Rev. Mat. Iberoamericana 26(2) (2010), 611–638.
[APS] Aranda Pino, G., Pardo, E., and Siles Molina, M., Exchange Leavitt path
algebras and stable rank, J. Algebra 305(2) (2006), 912–936.
[ARM2] Aranda Pino, G., Rangaswamy, K. M., and Siles Molina, M., Weakly reg-
ular and self-injective Leavitt path algebras over arbitrary graphs, Algebr. Represent. Theory (to appear).
[BPRS] Bates, T., Pask, D., Raeburn, I., and Szymański, W., The C ∗ -algebras
of row-finite graphs, New York J. Math. 6 (2000), 307–324.
[CY] Camillo, V., and Yu, H.P., Stable range one for rings with many idem-
potents, Trans. Amer. Math. Soc. 347(8) (1995), 3141–3147.
[DHS] Deicke, K., Hong, J. H., and Szymański, W., Stable rank of graph al-
gebras: Type I graph algebras and their limits, Indiana Univ. Math. J.
52(4) (2003), 963–979.
[D] Divinsky, N. J., Rings and Radicals, George Allen and Unwin, London
(1965).
[GS] Garcı́a, J. L., and Simón, J. J., Morita equivalence for idempotent rings,
J. Pure Appl. Algebra 76 (1991), 39–56.
[G1] Goodearl, K. R., Von Neumann Regular Rings, Pitman, London (1979).
[G2] Goodearl, K. R., Leavitt path algebras and direct limits, Contemp. Math. 480 (2009), 165–187.
[J1] Jacobson, N., Lectures in Abstract Algebra, vol. II, Linear Algebra, van
Nostrand (1953).
[L2] Lam, T. Y., Lectures on Modules and Rings, Springer-Verlag, New York
(1999).
[Le1] Leavitt, W. G., Modules without invariant basis number, Proc. Amer.
Math. Soc. 8 (1957), 322–328.
[Le2] Leavitt, W. G., The module type of a ring, Trans. Amer. Math. Soc. 103
(1962), 113–130.
[NV] Năstăsescu, C., and Van Oystaeyen, F., Graded and Filtered Rings and
Modules, Springer-Verlag, New York (1979).
[Ram] Ramamurthi, V. S., Weakly regular rings, Canad. Math. Bull., 16 (1973),
317–321.
[Rae] Raeburn, I., Graph Algebras, CBMS Reg. Conf. Ser. Math., vol. 103, Amer. Math. Soc., Providence, RI (2005).
[Ri] Ribenboim, P., Rings and Modules, Interscience Publishers, New York
(1969).
[To] Tomforde, M., Uniqueness theorems and ideal structure for Leavitt path
algebras, J. Algebra 318 (2007), 270–299.
[Tu] Tuganbaev, A., Rings Close to Regular, Mathematics and its Applica-
tions, 545, Kluwer Academic Publishers, Dordrecht (2002).