Contents
  1. sync.Mutex
  2. Exercise: Web Crawler

sync.Mutex

For mutual exclusion, Go's standard library provides sync.Mutex, which has two methods:

func (m *Mutex) Lock()
func (m *Mutex) Unlock()

We can surround a block of code with calls to Lock and Unlock so that only one goroutine executes it at a time, and we can use the defer keyword to make sure the mutex is always released.
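As a concrete illustration (the counter type below is a made-up example, not part of the tour), pairing Lock with a deferred Unlock keeps the critical section correct on every return path:

package main

import (
	"fmt"
	"sync"
)

// counter is a hypothetical type: an integer guarded by a mutex.
type counter struct {
	mu sync.Mutex
	n  int
}

// Inc takes the lock and defers the Unlock, so the mutex is
// released on every return path, even a panic.
func (c *counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // always 1000; without the mutex the increments would race
}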

Exercise: Web Crawler

This exercise uses Go's concurrency features to build a parallel crawler: modify the Crawl function to fetch URLs in parallel, without fetching the same URL twice.
Hint: you can keep a map of the URLs that have already been fetched, but a map by itself is not safe for concurrent use.
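One way to act on that hint, sketched here with illustrative names (visited and Seen are not part of the exercise), is to bundle the map with a mutex so the check and the insert happen under one lock:

// visited is an illustrative concurrency-safe set of URLs.
type visited struct {
	mu   sync.Mutex
	urls map[string]bool
}

// Seen reports whether url was already recorded, recording it if not.
// Because the test and the write share one critical section, two
// goroutines can never both claim the same URL.
func (v *visited) Seen(url string) bool {
	v.mu.Lock()
	defer v.mu.Unlock()
	if v.urls[url] {
		return true
	}
	v.urls[url] = true
	return false
}

The exercise itself starts from this skeleton: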

package main

import (
	"fmt"
)

type Fetcher interface {
	// Fetch returns the body of URL and
	// a slice of URLs found on that page.
	Fetch(url string) (body string, urls []string, err error)
}

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
	// TODO: Fetch URLs in parallel.
	// TODO: Don't fetch the same URL twice.
	// This implementation doesn't do either:
	if depth <= 0 {
		return
	}
	body, urls, err := fetcher.Fetch(url)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found: %s %q\n", url, body)
	for _, u := range urls {
		Crawl(u, depth-1, fetcher)
	}
	return
}

func main() {
	Crawl("http://golang.org/", 4, fetcher)
}

// fakeFetcher is a Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult

type fakeResult struct {
	body string
	urls []string
}

func (f fakeFetcher) Fetch(url string) (string, []string, error) {
	if res, ok := f[url]; ok {
		return res.body, res.urls, nil
	}
	return "", nil, fmt.Errorf("not found: %s", url)
}

// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
	"http://golang.org/": &fakeResult{
		"The Go Programming Language",
		[]string{
			"http://golang.org/pkg/",
			"http://golang.org/cmd/",
		},
	},
	"http://golang.org/pkg/": &fakeResult{
		"Packages",
		[]string{
			"http://golang.org/",
			"http://golang.org/cmd/",
			"http://golang.org/pkg/fmt/",
			"http://golang.org/pkg/os/",
		},
	},
	"http://golang.org/pkg/fmt/": &fakeResult{
		"Package fmt",
		[]string{
			"http://golang.org/",
			"http://golang.org/pkg/",
		},
	},
	"http://golang.org/pkg/os/": &fakeResult{
		"Package os",
		[]string{
			"http://golang.org/",
			"http://golang.org/pkg/",
		},
	},
}

The problem statement is long, but the only part that needs to change is the Crawl function.

// m records which URLs have already been fetched; mtx guards it,
// since a plain map is not safe for concurrent use.
var m = make(map[string]bool)
var mtx sync.Mutex

// wg counts the outstanding crawler goroutines;
// main must call wg.Wait() after the initial Crawl.
var wg sync.WaitGroup

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
	if depth <= 0 {
		return
	}
	// Check and mark the URL in one critical section, so two
	// goroutines can never both claim the same URL.
	mtx.Lock()
	if m[url] {
		mtx.Unlock()
		return
	}
	m[url] = true
	mtx.Unlock()
	// Fetch outside the lock; holding the mutex here would
	// serialize all the fetches.
	body, urls, err := fetcher.Fetch(url)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found: %s %q\n", url, body)
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			Crawl(u, depth-1, fetcher)
		}(u)
	}
}

A map and a mutex are declared outside the function: the map records which URLs have been visited, and the mutex makes the check-and-mark step atomic. The child URLs are then crawled in parallel with go statements, with a sync.WaitGroup counting the outstanding goroutines ("sync" must be added to the import list). Note that Fetch is called only after the lock is released, so the fetches really do run concurrently.
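One detail the snippet leaves implicit: Crawl now returns as soon as it has spawned its goroutines, so main has to block on the WaitGroup, or the program may exit before any child page is fetched. A minimal adjustment to main:

func main() {
	Crawl("http://golang.org/", 4, fetcher)
	wg.Wait() // wait for every spawned Crawl goroutine to finish
}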

With that change, one possible run prints (the exact order varies, because the fetches run concurrently):

found: http://golang.org/ "The Go Programming Language"
found: http://golang.org/pkg/ "Packages"
found: http://golang.org/pkg/os/ "Package os"
not found: http://golang.org/cmd/
found: http://golang.org/pkg/fmt/ "Package fmt"

This concludes the basic tutorial; the next step is to improve through real-world practice.
