
Goroutine not running as expected

Source: stackoverflow


Question

I'm still learning Go and working through the web crawler exercise linked here. The main part of my implementation is shown below. (The remaining parts are unchanged and can be found at the link.)

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
    // TODO: Fetch URLs in parallel.
    // TODO: Don't fetch the same URL twice.
    // This implementation doesn't do either:
    if depth <= 0 {
        return
    }
    body, urls, err := fetcher.Fetch(url)
    cache.Set(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("found: %s %q\n", url, body)

    for _, u := range urls {
        if cache.Get(u) == false {
            fmt.Println("Next:", u)
            Crawl(u, depth-1, fetcher) // I want to parallelize this
        }
    }
    return
}

func main() {
    Crawl("https://golang.org/", 4, fetcher)
}

type SafeCache struct {
    v   map[string]bool
    mux sync.Mutex
}

func (c *SafeCache) Set(key string) {
    c.mux.Lock()
    c.v[key] = true
    c.mux.Unlock()
}

func (c *SafeCache) Get(key string) bool {
    return c.v[key]
}

var cache SafeCache = SafeCache{v: make(map[string]bool)}

When I run the code above, I get the expected output:

found: https://golang.org/ "The Go Programming Language"
Next: https://golang.org/pkg/
found: https://golang.org/pkg/ "Packages"
Next: https://golang.org/cmd/
not found: https://golang.org/cmd/
Next: https://golang.org/pkg/fmt/
found: https://golang.org/pkg/fmt/ "Package fmt"
Next: https://golang.org/pkg/os/
found: https://golang.org/pkg/os/ "Package os"

However, when I try to parallelize the crawler by changing Crawl(u, depth-1, fetcher) to go Crawl(u, depth-1, fetcher) (the commented line in the program above), the output is not what I expected:

found: https://golang.org/ "The Go Programming Language"
Next: https://golang.org/pkg/
Next: https://golang.org/cmd/

I thought adding the go keyword would be as simple as it looks, but I'm not sure what's going wrong and I'm unsure of the best way to fix it. Any input would be appreciated. Thanks in advance!


Solution


Your program most likely exits before the crawlers have finished their work: once main returns, the Go runtime exits without waiting for any goroutines that are still running. One way to fix this is to give Crawl a WaitGroup that it uses to wait for all of its sub-crawlers to finish. For example:

import "sync"

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher, *wg sync.WaitGroup) {
    defer func() {
        // If the crawler was given a wait group, signal that it's finished
        if wg != nil {
            wg.Done()
        }
    }()

    if depth <= 0 {
        return
    }

    _, urls, err := fetcher.Fetch(url)
    cache.Set(url)
    if err != nil {
        fmt.Println(err)
        return
    }

    fmt.Printf("found: %s %q\n", url, body)

    var crawlers sync.WaitGroup
    for _, u := range urls {
        if cache.Get(u) == false {
            fmt.Println("Next:", u)
            crawlers.Add(1)
            go Crawl(u, depth-1, fetcher, &crawlers)
        }
    }
    crawlers.Wait() // Waits for its sub-crawlers to finish

    return 
}

func main() {
   // The root does not need a WaitGroup
   Crawl("http://example.com/index.html", 4, nil)
}
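
A side note that goes beyond what the question asks: once the recursive calls run in separate goroutines, SafeCache.Get reads the map while SafeCache.Set may be writing to it concurrently, which the race detector (go run -race) will flag as a data race. A minimal sketch of a fully synchronized Get, reusing the SafeCache type from the question, might look like this:

// Get reports whether key has already been fetched. It takes the same
// mutex that Set uses, so a concurrent Set cannot race with this read.
func (c *SafeCache) Get(key string) bool {
    c.mux.Lock()
    defer c.mux.Unlock()
    return c.v[key]
}

Even with this change there is still a small check-then-act window between a crawler's Get and the later Set, so the same URL can occasionally be fetched twice; closing that window would require combining the check and the update into a single method that holds the lock across both steps.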
